From nobody Fri Dec 19 16:59:30 2025
From: Qi Zheng
To: david@redhat.com, hughd@google.com, willy@infradead.org, mgorman@suse.de, muchun.song@linux.dev, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Qi Zheng
Subject: [RFC PATCH 1/7] mm: pgtable: make pte_offset_map_nolock() return pmdval
Date: Mon, 1 Jul 2024 16:46:42 +0800
Message-Id: <7f5233f9f612c7f58abf218852fb1042d764940b.1719570849.git.zhengqi.arch@bytedance.com>

Make pte_offset_map_nolock() return pmdval so that we can recheck the
*pmd once the lock is taken. This is a preparation for freeing empty
PTE pages; no functional changes are expected.

Signed-off-by: Qi Zheng
---
 Documentation/mm/split_page_table_lock.rst |  3 ++-
 arch/arm/mm/fault-armv.c                   |  2 +-
 arch/powerpc/mm/pgtable.c                  |  2 +-
 include/linux/mm.h                         |  4 ++--
 mm/filemap.c                               |  2 +-
 mm/khugepaged.c                            |  4 ++--
 mm/memory.c                                |  4 ++--
 mm/mremap.c                                |  2 +-
 mm/page_vma_mapped.c                       |  2 +-
 mm/pgtable-generic.c                       | 21 ++++++++++++---------
 mm/userfaultfd.c                           |  4 ++--
 mm/vmscan.c                                |  2 +-
 12 files changed, 28 insertions(+), 24 deletions(-)

diff --git a/Documentation/mm/split_page_table_lock.rst b/Documentation/mm/split_page_table_lock.rst
index e4f6972eb6c0..e6a47d57531c 100644
--- a/Documentation/mm/split_page_table_lock.rst
+++ b/Documentation/mm/split_page_table_lock.rst
@@ -18,7 +18,8 @@ There are helpers to lock/unlock a table and other accessor functions:
	pointer to its PTE table lock, or returns NULL if no PTE table;
 - pte_offset_map_nolock()
	maps PTE, returns pointer to PTE with pointer to its PTE table
-	lock (not taken), or returns NULL if no PTE table;
+	lock (not taken) and the value of its pmd entry, or returns NULL
+	if no PTE table;
 - pte_offset_map()
	maps PTE, returns pointer to PTE, or returns NULL if no PTE table;
 - pte_unmap()
diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c
index 2286c2ea60ec..3e4ed99b9330 100644
--- a/arch/arm/mm/fault-armv.c
+++ b/arch/arm/mm/fault-armv.c
@@ -117,7 +117,7 @@ static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
	 * must use the nested version. This also means we need to
	 * open-code the spin-locking.
	 */
-	pte = pte_offset_map_nolock(vma->vm_mm, pmd, address, &ptl);
+	pte = pte_offset_map_nolock(vma->vm_mm, pmd, NULL, address, &ptl);
	if (!pte)
		return 0;

diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index 9e7ba9c3851f..ab0250f1b226 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -350,7 +350,7 @@ void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
	 */
	if (pmd_none(*pmd))
		return;
-	pte = pte_offset_map_nolock(mm, pmd, addr, &ptl);
+	pte = pte_offset_map_nolock(mm, pmd, NULL, addr, &ptl);
	BUG_ON(!pte);
	assert_spin_locked(ptl);
	pte_unmap(pte);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7d044e737dba..396bdc3b3726 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2979,8 +2979,8 @@ static inline pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd,
	return pte;
 }

-pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
-			     unsigned long addr, spinlock_t **ptlp);
+pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd, pmd_t *pmdvalp,
+			     unsigned long addr, spinlock_t **ptlp);

 #define pte_unmap_unlock(pte, ptl)	do {		\
	spin_unlock(ptl);				\
diff --git a/mm/filemap.c b/mm/filemap.c
index 6835977ee99a..35bbba960447 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3231,7 +3231,7 @@ static vm_fault_t filemap_fault_recheck_pte_none(struct vm_fault *vmf)
	if (!(vmf->flags & FAULT_FLAG_ORIG_PTE_VALID))
		return 0;

-	ptep = pte_offset_map_nolock(vma->vm_mm, vmf->pmd, vmf->address,
+	ptep = pte_offset_map_nolock(vma->vm_mm, vmf->pmd, NULL, vmf->address,
				     &vmf->ptl);
	if (unlikely(!ptep))
		return VM_FAULT_NOPAGE;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2e017585f813..7b7c858d5f99 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -989,7 +989,7 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
	};

	if (!pte++) {
-		pte = pte_offset_map_nolock(mm, pmd, address, &ptl);
+		pte = pte_offset_map_nolock(mm, pmd, NULL, address, &ptl);
		if (!pte) {
			mmap_read_unlock(mm);
			result = SCAN_PMD_NULL;
@@ -1578,7 +1578,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
	if (userfaultfd_armed(vma) && !(vma->vm_flags & VM_SHARED))
		pml = pmd_lock(mm, pmd);

-	start_pte = pte_offset_map_nolock(mm, pmd, haddr, &ptl);
+	start_pte = pte_offset_map_nolock(mm, pmd, NULL, haddr, &ptl);
	if (!start_pte)		/* mmap_lock + page lock should prevent this */
		goto abort;
	if (!pml)
diff --git a/mm/memory.c b/mm/memory.c
index 0a769f34bbb2..1c9068b0b067 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1108,7 +1108,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
		ret = -ENOMEM;
		goto out;
	}
-	src_pte = pte_offset_map_nolock(src_mm, src_pmd, addr, &src_ptl);
+	src_pte = pte_offset_map_nolock(src_mm, src_pmd, NULL, addr, &src_ptl);
	if (!src_pte) {
		pte_unmap_unlock(dst_pte, dst_ptl);
		/* ret == 0 */
@@ -5507,7 +5507,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
		 * it into a huge pmd: just retry later if so.
		 */
		vmf->pte = pte_offset_map_nolock(vmf->vma->vm_mm, vmf->pmd,
-						 vmf->address, &vmf->ptl);
+						 NULL, vmf->address, &vmf->ptl);
		if (unlikely(!vmf->pte))
			return 0;
		vmf->orig_pte = ptep_get_lockless(vmf->pte);
diff --git a/mm/mremap.c b/mm/mremap.c
index e7ae140fc640..f672d0218a6f 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -175,7 +175,7 @@ static int move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
		err = -EAGAIN;
		goto out;
	}
-	new_pte = pte_offset_map_nolock(mm, new_pmd, new_addr, &new_ptl);
+	new_pte = pte_offset_map_nolock(mm, new_pmd, NULL, new_addr, &new_ptl);
	if (!new_pte) {
		pte_unmap_unlock(old_pte, old_ptl);
		err = -EAGAIN;
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index ae5cc42aa208..507701b7bcc1 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -33,7 +33,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
	 * Though, in most cases, page lock already protects this.
	 */
	pvmw->pte = pte_offset_map_nolock(pvmw->vma->vm_mm, pvmw->pmd,
-					  pvmw->address, ptlp);
+					  NULL, pvmw->address, ptlp);
	if (!pvmw->pte)
		return false;
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index a78a4adf711a..443e3b34434a 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -305,7 +305,7 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
	return NULL;
 }

-pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
+pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd, pmd_t *pmdvalp,
			     unsigned long addr, spinlock_t **ptlp)
 {
	pmd_t pmdval;
@@ -314,6 +314,8 @@ pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
	pte = __pte_offset_map(pmd, addr, &pmdval);
	if (likely(pte))
		*ptlp = pte_lockptr(mm, &pmdval);
+	if (pmdvalp)
+		*pmdvalp = pmdval;
	return pte;
 }

@@ -347,14 +349,15 @@ pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
 * and disconnected table. Until pte_unmap(pte) unmaps and rcu_read_unlock()s
 * afterwards.
 *
- * pte_offset_map_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
- * but when successful, it also outputs a pointer to the spinlock in ptlp - as
- * pte_offset_map_lock() does, but in this case without locking it. This helps
- * the caller to avoid a later pte_lockptr(mm, *pmd), which might by that time
- * act on a changed *pmd: pte_offset_map_nolock() provides the correct spinlock
- * pointer for the page table that it returns. In principle, the caller should
- * recheck *pmd once the lock is taken; in practice, no callsite needs that -
- * either the mmap_lock for write, or pte_same() check on contents, is enough.
+ * pte_offset_map_nolock(mm, pmd, pmdvalp, addr, ptlp), above, is like
+ * pte_offset_map(); but when successful, it also outputs a pointer to the
+ * spinlock in ptlp - as pte_offset_map_lock() does, but in this case without
+ * locking it. This helps the caller to avoid a later pte_lockptr(mm, *pmd),
+ * which might by that time act on a changed *pmd: pte_offset_map_nolock()
+ * provides the correct spinlock pointer for the page table that it returns.
+ * In principle, the caller should recheck *pmd once the lock is taken; but in
+ * most cases, either the mmap_lock for write, or pte_same() check on contents,
+ * is enough.
 *
 * Note that free_pgtables(), used after unmapping detached vmas, or when
 * exiting the whole mm, does not take page table lock before freeing a page
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 8dedaec00486..61c1d228d239 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -1143,7 +1143,7 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
				src_addr, src_addr + PAGE_SIZE);
	mmu_notifier_invalidate_range_start(&range);
retry:
-	dst_pte = pte_offset_map_nolock(mm, dst_pmd, dst_addr, &dst_ptl);
+	dst_pte = pte_offset_map_nolock(mm, dst_pmd, NULL, dst_addr, &dst_ptl);

	/* Retry if a huge pmd materialized from under us */
	if (unlikely(!dst_pte)) {
@@ -1151,7 +1151,7 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
		goto out;
	}

-	src_pte = pte_offset_map_nolock(mm, src_pmd, src_addr, &src_ptl);
+	src_pte = pte_offset_map_nolock(mm, src_pmd, NULL, src_addr, &src_ptl);

	/*
	 * We held the mmap_lock for reading so MADV_DONTNEED
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3d4c681c6d40..c9a4cd31e6b4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3373,7 +3373,7 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
	DEFINE_MAX_SEQ(walk->lruvec);
	int old_gen, new_gen = lru_gen_from_seq(max_seq);

-	pte = pte_offset_map_nolock(args->mm, pmd, start & PMD_MASK, &ptl);
+	pte = pte_offset_map_nolock(args->mm, pmd, NULL, start & PMD_MASK, &ptl);
	if (!pte)
		return false;
	if (!spin_trylock(ptl)) {
-- 
2.20.1
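
[The calling convention this patch enables looks like the sketch below
(illustrative only; the helper name map_and_check is not part of the
series). The caller captures the pmd value at map time and rechecks it
once the PTL is held, which is the pattern the later PT_RECLAIM patches
rely on:

	static bool map_and_check(struct mm_struct *mm, pmd_t *pmd,
				  unsigned long addr)
	{
		spinlock_t *ptl;
		pmd_t pmdval;
		pte_t *pte;

		pte = pte_offset_map_nolock(mm, pmd, &pmdval, addr, &ptl);
		if (!pte)
			return false;

		spin_lock(ptl);
		/* *pmd may have changed between mapping and locking */
		if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
			pte_unmap_unlock(pte, ptl);
			return false;
		}
		/* ... operate on the PTE page with a stable *pmd ... */
		pte_unmap_unlock(pte, ptl);
		return true;
	}
]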
From nobody Fri Dec 19 16:59:30 2025
From: Qi Zheng
To: david@redhat.com, hughd@google.com, willy@infradead.org, mgorman@suse.de, muchun.song@linux.dev, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Qi Zheng
Subject: [RFC PATCH 2/7] mm: introduce CONFIG_PT_RECLAIM
Date: Mon, 1 Jul 2024 16:46:43 +0800
Message-Id: <58942ecf91fea0a62307e5ab848228142a1270ac.1719570849.git.zhengqi.arch@bytedance.com>

This configuration variable will be used to build the code needed to
free empty user page table pages. The feature is not yet available on
all architectures, so the gate symbol ARCH_SUPPORTS_PT_RECLAIM is
needed; we can remove it once all architectures support this feature.

Signed-off-by: Qi Zheng
---
 mm/Kconfig | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/mm/Kconfig b/mm/Kconfig
index 991fa9cf6137..7e2c87784d86 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1256,6 +1256,20 @@ config IOMMU_MM_DATA
 config EXECMEM
	bool

+config ARCH_SUPPORTS_PT_RECLAIM
+	def_bool n
+
+config PT_RECLAIM
+	bool "reclaim empty user page table pages"
+	default y
+	depends on ARCH_SUPPORTS_PT_RECLAIM && MMU && SMP
+	select MMU_GATHER_RCU_TABLE_FREE
+	help
+	  Try to reclaim empty user page table pages in paths other than the
+	  munmap and exit_mmap paths.
+
+	  Note: for now, only empty user PTE page table pages will be reclaimed.
+
 source "mm/damon/Kconfig"

 endmenu
-- 
2.20.1
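
[Code that must behave differently when this feature is built in can
test the symbol with IS_ENABLED(), as a later patch in this series does
in khugepaged. A minimal sketch of the shape (pgt_pmd here stands for a
pmd value captured earlier via pte_offset_map_nolock()):

	/* Only needed when PT_RECLAIM may clear the pmd entry under us. */
	if (unlikely(IS_ENABLED(CONFIG_PT_RECLAIM) &&
		     !pmd_same(pgt_pmd, pmdp_get_lockless(pmd))))
		goto abort;
]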
From nobody Fri Dec 19 16:59:30 2025
From: Qi Zheng
To: david@redhat.com, hughd@google.com, willy@infradead.org, mgorman@suse.de, muchun.song@linux.dev, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Qi Zheng
Subject: [RFC PATCH 3/7] mm: pass address information to pmd_install()
Date: Mon, 1 Jul 2024 16:46:44 +0800

In the subsequent implementation of freeing empty page table pages, we
need the address information to flush the TLB, so pass the address to
pmd_install() in advance. No functional changes.

Signed-off-by: Qi Zheng
---
 include/linux/hugetlb.h |  2 +-
 include/linux/mm.h      |  9 +++++----
 mm/debug_vm_pgtable.c   |  2 +-
 mm/filemap.c            |  2 +-
 mm/gup.c                |  2 +-
 mm/internal.h           |  3 ++-
 mm/memory.c             | 15 ++++++++-------
 mm/migrate_device.c     |  2 +-
 mm/mprotect.c           |  8 ++++----
 mm/mremap.c             |  2 +-
 mm/userfaultfd.c        |  6 +++---
 11 files changed, 28 insertions(+), 25 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index a951c0d06061..55715eb5cb34 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -198,7 +198,7 @@ static inline pte_t *pte_offset_huge(pmd_t *pmd, unsigned long address)
 static inline pte_t *pte_alloc_huge(struct mm_struct *mm, pmd_t *pmd,
				    unsigned long address)
 {
-	return pte_alloc(mm, pmd) ? NULL : pte_offset_huge(pmd, address);
+	return pte_alloc(mm, pmd, address) ? NULL : pte_offset_huge(pmd, address);
 }
 #endif

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 396bdc3b3726..880100a8b472 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2800,7 +2800,7 @@ static inline void mm_inc_nr_ptes(struct mm_struct *mm) {}
 static inline void mm_dec_nr_ptes(struct mm_struct *mm) {}
 #endif

-int __pte_alloc(struct mm_struct *mm, pmd_t *pmd);
+int __pte_alloc(struct mm_struct *mm, pmd_t *pmd, unsigned long addr);
 int __pte_alloc_kernel(pmd_t *pmd);

 #if defined(CONFIG_MMU)
@@ -2987,13 +2987,14 @@ pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd, pmd_t *pmdvalp,
	pte_unmap(pte);					\
 } while (0)

-#define pte_alloc(mm, pmd) (unlikely(pmd_none(*(pmd))) && __pte_alloc(mm, pmd))
+#define pte_alloc(mm, pmd, addr)					\
+	(unlikely(pmd_none(*(pmd))) && __pte_alloc(mm, pmd, addr))

 #define pte_alloc_map(mm, pmd, address)			\
-	(pte_alloc(mm, pmd) ? NULL : pte_offset_map(pmd, address))
+	(pte_alloc(mm, pmd, address) ? NULL : pte_offset_map(pmd, address))

 #define pte_alloc_map_lock(mm, pmd, address, ptlp)	\
-	(pte_alloc(mm, pmd) ?				\
+	(pte_alloc(mm, pmd, address) ?			\
		 NULL : pte_offset_map_lock(mm, pmd, address, ptlp))

 #define pte_alloc_kernel(pmd, address)			\
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index e4969fb54da3..18375744e184 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -1246,7 +1246,7 @@ static int __init init_args(struct pgtable_debug_args *args)
	args->start_pmdp = pmd_offset(args->pudp, 0UL);
	WARN_ON(!args->start_pmdp);

-	if (pte_alloc(args->mm, args->pmdp)) {
+	if (pte_alloc(args->mm, args->pmdp, args->vaddr)) {
		pr_err("Failed to allocate pte entries\n");
		ret = -ENOMEM;
		goto error;
diff --git a/mm/filemap.c b/mm/filemap.c
index 35bbba960447..d8b936d87eb4 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3453,7 +3453,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct folio *folio,
	}

	if (pmd_none(*vmf->pmd) && vmf->prealloc_pte)
-		pmd_install(mm, vmf->pmd, &vmf->prealloc_pte);
+		pmd_install(mm, vmf->pmd, vmf->address, &vmf->prealloc_pte);

	return false;
 }
diff --git a/mm/gup.c b/mm/gup.c
index 8bea9ad80984..b87b1ea9d008 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1105,7 +1105,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
		spin_unlock(ptl);
		split_huge_pmd(vma, pmd, address);
		/* If pmd was left empty, stuff a page table in there quickly */
-		return pte_alloc(mm, pmd) ? ERR_PTR(-ENOMEM) :
+		return pte_alloc(mm, pmd, address) ? ERR_PTR(-ENOMEM) :
			follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
	}
	page = follow_huge_pmd(vma, address, pmd, flags, ctx);
diff --git a/mm/internal.h b/mm/internal.h
index 2ea9a88dcb95..1dfdad110a9a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -320,7 +320,8 @@ void folio_activate(struct folio *folio);
 void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
		   struct vm_area_struct *start_vma, unsigned long floor,
		   unsigned long ceiling, bool mm_wr_locked);
-void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte);
+void pmd_install(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
+		 pgtable_t *pte);

 struct zap_details;
 void unmap_page_range(struct mmu_gather *tlb,
diff --git a/mm/memory.c b/mm/memory.c
index 1c9068b0b067..09db2c97cc5c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -417,7 +417,8 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
	} while (vma);
 }

-void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte)
+void pmd_install(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
+		 pgtable_t *pte)
 {
	spinlock_t *ptl = pmd_lock(mm, pmd);

@@ -443,13 +444,13 @@ void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte)
	spin_unlock(ptl);
 }

-int __pte_alloc(struct mm_struct *mm, pmd_t *pmd)
+int __pte_alloc(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
 {
	pgtable_t new = pte_alloc_one(mm);
	if (!new)
		return -ENOMEM;

-	pmd_install(mm, pmd, &new);
+	pmd_install(mm, pmd, addr, &new);
	if (new)
		pte_free(mm, new);
	return 0;
@@ -2115,7 +2116,7 @@ static int insert_pages(struct vm_area_struct *vma, unsigned long addr,

	/* Allocate the PTE if necessary; takes PMD lock once only. */
	ret = -ENOMEM;
-	if (pte_alloc(mm, pmd))
+	if (pte_alloc(mm, pmd, addr))
		goto out;

	while (pages_to_write_in_pmd) {
@@ -4521,7 +4522,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
	 * Use pte_alloc() instead of pte_alloc_map(), so that OOM can
	 * be distinguished from a transient failure of pte_offset_map().
	 */
-	if (pte_alloc(vma->vm_mm, vmf->pmd))
+	if (pte_alloc(vma->vm_mm, vmf->pmd, vmf->address))
		return VM_FAULT_OOM;

	/* Use the zero-page for reads */
@@ -4868,8 +4869,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
		}

		if (vmf->prealloc_pte)
-			pmd_install(vma->vm_mm, vmf->pmd, &vmf->prealloc_pte);
-		else if (unlikely(pte_alloc(vma->vm_mm, vmf->pmd)))
+			pmd_install(vma->vm_mm, vmf->pmd, vmf->address, &vmf->prealloc_pte);
+		else if (unlikely(pte_alloc(vma->vm_mm, vmf->pmd, vmf->address)))
			return VM_FAULT_OOM;
	}

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 6d66dc1c6ffa..e4d2e19e6611 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -598,7 +598,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
		goto abort;
	if (pmd_trans_huge(*pmdp) || pmd_devmap(*pmdp))
		goto abort;
-	if (pte_alloc(mm, pmdp))
+	if (pte_alloc(mm, pmdp, addr))
		goto abort;
	if (unlikely(anon_vma_prepare(vma)))
		goto abort;
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 222ab434da54..1a1537ddffe4 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -330,11 +330,11 @@ pgtable_populate_needed(struct vm_area_struct *vma, unsigned long cp_flags)
 * allocation failures during page faults by kicking OOM and returning
 * error.
 */
-#define change_pmd_prepare(vma, pmd, cp_flags)				\
+#define change_pmd_prepare(vma, pmd, addr, cp_flags)			\
	({								\
		long err = 0;						\
		if (unlikely(pgtable_populate_needed(vma, cp_flags))) {	\
-			if (pte_alloc(vma->vm_mm, pmd))			\
+			if (pte_alloc(vma->vm_mm, pmd, addr))		\
				err = -ENOMEM;				\
		}							\
		err;							\
@@ -375,7 +375,7 @@ static inline long change_pmd_range(struct mmu_gather *tlb,
again:
		next = pmd_addr_end(addr, end);

-		ret = change_pmd_prepare(vma, pmd, cp_flags);
+		ret = change_pmd_prepare(vma, pmd, addr, cp_flags);
		if (ret) {
			pages = ret;
			break;
@@ -402,7 +402,7 @@ static inline long change_pmd_range(struct mmu_gather *tlb,
			 * cleared; make sure pmd populated if
			 * necessary, then fall-through to pte level.
			 */
-			ret = change_pmd_prepare(vma, pmd, cp_flags);
+			ret = change_pmd_prepare(vma, pmd, addr, cp_flags);
			if (ret) {
				pages = ret;
				break;
diff --git a/mm/mremap.c b/mm/mremap.c
index f672d0218a6f..7723d11e77cd 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -628,7 +628,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
		}
		if (pmd_none(*old_pmd))
			continue;
-		if (pte_alloc(new_vma->vm_mm, new_pmd))
+		if (pte_alloc(new_vma->vm_mm, new_pmd, new_addr))
			break;
		if (move_ptes(vma, old_pmd, old_addr, old_addr + extent,
			      new_vma, new_pmd, new_addr, need_rmap_locks) < 0)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 61c1d228d239..e1674580b54f 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -796,7 +796,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
			break;
		}
		if (unlikely(pmd_none(dst_pmdval)) &&
-		    unlikely(__pte_alloc(dst_mm, dst_pmd))) {
+		    unlikely(__pte_alloc(dst_mm, dst_pmd, dst_addr))) {
			err = -ENOMEM;
			break;
		}
@@ -1713,13 +1713,13 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
				err = -ENOENT;
				break;
			}
-			if (unlikely(__pte_alloc(mm, src_pmd))) {
+			if (unlikely(__pte_alloc(mm, src_pmd, src_addr))) {
				err = -ENOMEM;
				break;
			}
		}

-		if (unlikely(pte_alloc(mm, dst_pmd))) {
+		if (unlikely(pte_alloc(mm, dst_pmd, dst_addr))) {
			err = -ENOMEM;
			break;
		}
-- 
2.20.1
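
[After this change, every allocation site passes the address that the
new PTE table will cover, so that a later patch can flush stale TLB
entries for that range from inside pmd_install(). A minimal sketch of
the updated convention (illustrative caller, not from the series):

	/*
	 * Allocate and install a PTE table for @addr if the pmd entry is
	 * still none; @addr lets pmd_install() flush the TLB first when
	 * PT_RECLAIM has left a flush pending for this range.
	 */
	if (pte_alloc(mm, pmd, addr))
		return -ENOMEM;
]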
From nobody Fri Dec 19 16:59:30 2025
From: Qi Zheng
To: david@redhat.com, hughd@google.com, willy@infradead.org, mgorman@suse.de, muchun.song@linux.dev, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Qi Zheng
Subject: [RFC PATCH 4/7] mm: pgtable: try to reclaim empty PTE pages in zap_page_range_single()
Date: Mon, 1 Jul 2024 16:46:45 +0800
Message-Id: <09a7b82e61bc87849ca6bde35f98345d109817e2.1719570849.git.zhengqi.arch@bytedance.com>

In pursuit of high performance, applications mostly use some
high-performance user-mode memory allocators, such as jemalloc or
tcmalloc. These memory allocators use madvise(MADV_DONTNEED or
MADV_FREE) to release physical memory, but neither MADV_DONTNEED nor
MADV_FREE releases page table memory, which may cause huge page table
memory usage.

The following is a memory usage snapshot of one process, which actually
happened on our server:

	VIRT:  55t
	RES:   590g
	VmPTE: 110g

(With 4K pages, each PTE page maps 2M of address space, so ~55t of
mostly-sparse virtual space accounts for roughly the 110g of VmPTE even
though almost all entries are empty.)

In this case, most of the page table entries are empty. For such a PTE
page where all entries are empty, we can actually free it back to the
system for others to use.

As a first step, this commit attempts to synchronously free the empty
PTE pages in zap_page_range_single() (which MADV_DONTNEED etc. will
invoke).
In order to reduce overhead, we only handle the cases with a high
probability of generating empty PTE pages; other cases are filtered
out, such as:

 - hugetlb vma (unsuitable)
 - userfaultfd_wp vma (may reinstall the pte entry)
 - writable private file mapping case (COW-ed anon page is not zapped)
 - etc.

For the userfaultfd_wp and private file mapping cases (and the
MADV_FREE case, of course), consider scanning and freeing empty PTE
pages asynchronously in the future.

The following code snippet demonstrates the effect of the optimization:

	mmap 50G
	while (1) {
		for (; i < 1024 * 25; i++) {
			touch 2M memory
			madvise MADV_DONTNEED 2M
		}
	}

As we can see, the memory usage of VmPTE is reduced:

			before		after
	VIRT		50.0 GB		50.0 GB
	RES		3.1 MB		3.1 MB
	VmPTE		102640 KB	240 KB

Signed-off-by: Qi Zheng
---
 include/linux/pgtable.h |  14 +++++
 mm/Makefile             |   1 +
 mm/huge_memory.c        |   3 +
 mm/internal.h           |  14 +++++
 mm/khugepaged.c         |  22 ++++++-
 mm/memory.c             |   2 +
 mm/pt_reclaim.c         | 131 ++++++++++++++++++++++++++++++++++++++++
 7 files changed, 186 insertions(+), 1 deletion(-)
 create mode 100644 mm/pt_reclaim.c

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 2f32eaccf0b9..59e894f705a7 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -447,6 +447,20 @@ static inline void arch_check_zapped_pmd(struct vm_area_struct *vma,
 }
 #endif

+#ifndef arch_flush_tlb_before_set_huge_page
+static inline void arch_flush_tlb_before_set_huge_page(struct mm_struct *mm,
+						       unsigned long addr)
+{
+}
+#endif
+
+#ifndef arch_flush_tlb_before_set_pte_page
+static inline void arch_flush_tlb_before_set_pte_page(struct mm_struct *mm,
+						      unsigned long addr)
+{
+}
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
				       unsigned long address,
diff --git a/mm/Makefile b/mm/Makefile
index d2915f8c9dc0..3cb3c1f5d090 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -141,3 +141,4 @@ obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_GENERIC_IOREMAP) += ioremap.o
 obj-$(CONFIG_SHRINKER_DEBUG) += shrinker_debug.o
 obj-$(CONFIG_EXECMEM) += execmem.o
+obj-$(CONFIG_PT_RECLAIM) += pt_reclaim.o
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c7ce28f6b7f3..444a1cdaf06d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -977,6 +977,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
		folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
		folio_add_lru_vma(folio, vma);
		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
+		arch_flush_tlb_before_set_huge_page(vma->vm_mm, haddr);
		set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
		update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
		add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
@@ -1044,6 +1045,7 @@ static void set_huge_zero_folio(pgtable_t pgtable, struct mm_struct *mm,
	entry = mk_pmd(&zero_folio->page, vma->vm_page_prot);
	entry = pmd_mkhuge(entry);
	pgtable_trans_huge_deposit(mm, pmd, pgtable);
+	arch_flush_tlb_before_set_huge_page(mm, haddr);
	set_pmd_at(mm, haddr, pmd, entry);
	mm_inc_nr_ptes(mm);
 }
@@ -1151,6 +1153,7 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
		pgtable = NULL;
	}

+	arch_flush_tlb_before_set_huge_page(mm, addr);
	set_pmd_at(mm, addr, pmd, entry);
	update_mmu_cache_pmd(vma, addr, pmd);

diff --git a/mm/internal.h b/mm/internal.h
index 1dfdad110a9a..ac1fdd4681dc 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1579,4 +1579,18 @@ void unlink_file_vma_batch_init(struct unlink_vma_file_batch *);
 void unlink_file_vma_batch_add(struct unlink_vma_file_batch *, struct vm_area_struct *);
 void unlink_file_vma_batch_final(struct unlink_vma_file_batch *);

+#ifdef CONFIG_PT_RECLAIM
+void try_to_reclaim_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
+			     unsigned long start_addr, unsigned long end_addr,
+			     struct zap_details *details);
+#else
+static inline void try_to_reclaim_pgtables(struct mmu_gather *tlb,
+					   struct vm_area_struct *vma,
+					   unsigned long start_addr,
+					   unsigned long end_addr,
+					   struct zap_details *details)
+{
+}
+#endif /* CONFIG_PT_RECLAIM */
+
 #endif	/* __MM_INTERNAL_H */
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 7b7c858d5f99..63551077795d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1578,7 +1578,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
	if (userfaultfd_armed(vma) && !(vma->vm_flags & VM_SHARED))
		pml = pmd_lock(mm, pmd);

-	start_pte = pte_offset_map_nolock(mm, pmd, NULL, haddr, &ptl);
+	start_pte = pte_offset_map_nolock(mm, pmd, &pgt_pmd, haddr, &ptl);
	if (!start_pte)		/* mmap_lock + page lock should prevent this */
		goto abort;
	if (!pml)
@@ -1586,6 +1586,11 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
	else if (ptl != pml)
		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);

+	/* pmd entry may be changed by others */
+	if (unlikely(IS_ENABLED(CONFIG_PT_RECLAIM) && !pml &&
+		     !pmd_same(pgt_pmd, pmdp_get_lockless(pmd))))
+		goto abort;
+
	/* step 2: clear page table and adjust rmap */
	for (i = 0, addr = haddr, pte = start_pte;
	     i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE, pte++) {
@@ -1633,6 +1638,12 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
		pml = pmd_lock(mm, pmd);
		if (ptl != pml)
			spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
+
+		if (unlikely(IS_ENABLED(CONFIG_PT_RECLAIM) &&
+			     !pmd_same(pgt_pmd, pmdp_get_lockless(pmd)))) {
+			spin_unlock(ptl);
+			goto unlock;
+		}
	}
	pgt_pmd = pmdp_collapse_flush(vma, haddr, pmd);
	pmdp_get_lockless_sync();
@@ -1660,6 +1671,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
	}
	if (start_pte)
		pte_unmap_unlock(start_pte, ptl);
+unlock:
	if (pml && pml != ptl)
		spin_unlock(pml);
	if (notified)
@@ -1719,6 +1731,14 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
		mmu_notifier_invalidate_range_start(&range);

		pml = pmd_lock(mm, pmd);
+#ifdef CONFIG_PT_RECLAIM
+		/* check if the pmd is still valid */
+		if (check_pmd_still_valid(mm, addr, pmd) != SCAN_SUCCEED) {
+			spin_unlock(pml);
+			mmu_notifier_invalidate_range_end(&range);
+			continue;
+		}
+#endif
		ptl = pte_lockptr(mm, pmd);
		if (ptl != pml)
			spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
diff --git a/mm/memory.c b/mm/memory.c
index 09db2c97cc5c..b07d63767d93 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -423,6 +423,7 @@ void pmd_install(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
	spinlock_t *ptl = pmd_lock(mm, pmd);

	if (likely(pmd_none(*pmd))) {	/* Has another populated it ? */
+		arch_flush_tlb_before_set_pte_page(mm, addr);
		mm_inc_nr_ptes(mm);
		/*
		 * Ensure all pte setup (eg. pte page lock and page clearing) are
@@ -1931,6 +1932,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
	 * could have been expanded for hugetlb pmd sharing.
	 */
	unmap_single_vma(&tlb, vma, address, end, details, false);
+	try_to_reclaim_pgtables(&tlb, vma, address, end, details);
	mmu_notifier_invalidate_range_end(&range);
	tlb_finish_mmu(&tlb);
	hugetlb_zap_end(vma, details);
diff --git a/mm/pt_reclaim.c b/mm/pt_reclaim.c
new file mode 100644
index 000000000000..e375e7f2059f
--- /dev/null
+++ b/mm/pt_reclaim.c
@@ -0,0 +1,131 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/hugetlb.h>
+#include <linux/mm.h>
+#include <linux/pagewalk.h>
+#include <asm/tlb.h>
+
+#include "internal.h"
+
+/*
+ * Locking:
+ * - already held the mmap read lock to traverse the pgtable
+ * - use pmd lock for clearing pmd entry
+ * - use pte lock for checking empty PTE page, and release it after clearing
+ *   pmd entry, then we can capture the changed pmd in pte_offset_map_lock()
+ *   etc after holding this pte lock. Thanks to this, we don't need to hold the
+ *   rmap-related locks.
+ * - users of pte_offset_map_lock() etc all expect the PTE page to be stable by
+ *   using rcu lock, so PTE pages should be freed by RCU.
+ */
+static int reclaim_pgtables_pmd_entry(pmd_t *pmd, unsigned long addr,
+				      unsigned long next, struct mm_walk *walk)
+{
+	struct mm_struct *mm = walk->mm;
+	struct mmu_gather *tlb = walk->private;
+	pte_t *start_pte, *pte;
+	pmd_t pmdval;
+	spinlock_t *pml = NULL, *ptl;
+	int i;
+
+	start_pte = pte_offset_map_nolock(mm, pmd, &pmdval, addr, &ptl);
+	if (!start_pte)
+		return 0;
+
+	pml = pmd_lock(mm, pmd);
+	if (ptl != pml)
+		spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
+
+	if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd))))
+		goto out_ptl;
+
+	/* Check if it is empty PTE page */
+	for (i = 0, pte = start_pte; i < PTRS_PER_PTE; i++, pte++) {
+		if (!pte_none(ptep_get(pte)))
+			goto out_ptl;
+	}
+	pte_unmap(start_pte);
+
+	pmd_clear(pmd);
+	if (ptl != pml)
+		spin_unlock(ptl);
+	spin_unlock(pml);
+
+	/*
+	 * NOTE:
+	 * In order to reuse mmu_gather to batch flush tlb and free PTE pages,
+	 * here tlb is not flushed before pmd lock is unlocked. This may
+	 * result in the following two situations:
+	 *
+	 * 1) Userland can trigger page fault and fill a huge page, which will
+	 *    cause the existence of small size TLB and huge TLB for the same
+	 *    address.
+	 *
+	 * 2) Userland can also trigger page fault and fill a PTE page, which
+	 *    will cause the existence of two small size TLBs, but the PTE
+	 *    pages they map are different.
+	 *
+	 * Some CPUs do not allow these; to solve this, we can define
+	 * arch_flush_tlb_before_set_{huge|pte}_page to detect this case and
+	 * flush the TLB before filling a huge page or a PTE page in the page
+	 * fault path.
+	 */
+	pte_free_tlb(tlb, pmd_pgtable(pmdval), addr);
+	mm_dec_nr_ptes(mm);
+
+	return 0;
+
+out_ptl:
+	pte_unmap_unlock(start_pte, ptl);
+	if (pml != ptl)
+		spin_unlock(pml);
+
+	return 0;
+}
+
+static const struct mm_walk_ops reclaim_pgtables_walk_ops = {
+	.pmd_entry	= reclaim_pgtables_pmd_entry,
+	.walk_lock	= PGWALK_RDLOCK,
+};
+
+void try_to_reclaim_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
+			     unsigned long start_addr, unsigned long end_addr,
+			     struct zap_details *details)
+{
+	unsigned long start = max(vma->vm_start, start_addr);
+	unsigned long end;
+
+	if (start >= vma->vm_end)
+		return;
+	end = min(vma->vm_end, end_addr);
+	if (end <= vma->vm_start)
+		return;
+
+	/* Skip hugetlb case */
+	if (is_vm_hugetlb_page(vma))
+		return;
+
+	/* Leave this to the THP path to handle */
+	if (vma->vm_flags & VM_HUGEPAGE)
+		return;
+
+	/* userfaultfd_wp case may reinstall the pte entry, also skip */
+	if (userfaultfd_wp(vma))
+		return;
+
+	/*
+	 * For private file mapping, the COW-ed page is an anon page, and it
+	 * will not be zapped. For simplicity, skip all writable private
+	 * file mapping cases.
+	 */
+	if (details && !vma_is_anonymous(vma) &&
+	    !(vma->vm_flags & VM_MAYSHARE) &&
+	    (vma->vm_flags & VM_WRITE))
+		return;
+
+	start = ALIGN(start, PMD_SIZE);
+	end = ALIGN_DOWN(end, PMD_SIZE);
+	if (end - start < PMD_SIZE)
+		return;
+
+	walk_page_range_vma(vma, start, end, &reclaim_pgtables_walk_ops, tlb);
+}
-- 
2.20.1
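
[The reproducer sketched in the commit message above can be written as
a small program; a runnable approximation (the 50 GiB size and the
2 MiB step mirror the pseudo-code; MAP_NORESERVE is added here, beyond
what the pseudo-code states, so the large mapping succeeds under
default overcommit settings):

	#include <string.h>
	#include <sys/mman.h>

	#define STEP	(2UL << 20)	/* one PTE table maps 2 MiB */
	#define NSTEPS	(25 * 1024UL)	/* 25k * 2 MiB = 50 GiB */

	int main(void)
	{
		char *buf = mmap(NULL, NSTEPS * STEP, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE,
				 -1, 0);

		if (buf == MAP_FAILED)
			return 1;

		for (;;) {
			for (unsigned long i = 0; i < NSTEPS; i++) {
				memset(buf + i * STEP, 1, STEP); /* touch 2M */
				madvise(buf + i * STEP, STEP, MADV_DONTNEED);
			}
		}
	}

Without this patch, every iteration leaves behind an empty but
still-allocated PTE page; with it, the madvise() call frees the table
as well, which is where the VmPTE drop in the table above comes from.]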
From nobody Fri Dec 19 16:59:30 2025
From: Qi Zheng
To: david@redhat.com, hughd@google.com, willy@infradead.org, mgorman@suse.de, muchun.song@linux.dev, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Qi Zheng
Subject: [RFC PATCH 5/7] x86: mm: free page table pages by RCU instead of semi RCU
Date: Mon, 1 Jul 2024 16:46:46 +0800
Message-Id: <1a27215790293face83242cfd703e910aa0c5ce8.1719570849.git.zhengqi.arch@bytedance.com>

Now, if CONFIG_MMU_GATHER_RCU_TABLE_FREE is selected, the page table
pages are freed by semi-RCU, that is:

 - batch table freeing: asynchronous free by RCU
 - single table freeing: IPI + synchronous free

In this way, the page table can be traversed locklessly by disabling
IRQs in paths such as fast GUP. But this is not enough to free the
empty PTE page table pages in paths other than the munmap and exit_mmap
paths, because the IPI cannot be synchronized with rcu_read_lock() in
pte_offset_map{_lock}().

In preparation for supporting empty PTE page table page reclamation,
let single-table freeing also go through RCU, like batch-table freeing.
Then we can also use pte_offset_map() etc. to prevent the PTE page from
being freed.

Like pte_free_defer(), we can also safely use ptdesc->pt_rcu_head to
free the page table pages:

 - The pt_rcu_head is unioned with pt_list and pmd_huge_pte.
 - For pt_list, it is used to manage the PGD page in x86. Fortunately,
   tlb_remove_table() will not be used to free PGD pages, so it is safe
   to use pt_rcu_head.

 - For pmd_huge_pte, we do zap_deposited_table() before freeing the PMD
   page, so it is also safe.

Signed-off-by: Qi Zheng
---
 arch/x86/include/asm/tlb.h | 23 +++++++++++++++++++++++
 arch/x86/kernel/paravirt.c |  7 +++++++
 arch/x86/mm/pgtable.c      |  2 +-
 mm/mmu_gather.c            |  2 +-
 4 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 580636cdc257..9182db1e0264 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -34,4 +34,27 @@ static inline void __tlb_remove_table(void *table)
	free_page_and_swap_cache(table);
 }

+#ifndef CONFIG_PT_RECLAIM
+static inline void __tlb_remove_table_one(void *table)
+{
+	free_page_and_swap_cache(table);
+}
+#else
+static inline void __tlb_remove_table_one_rcu(struct rcu_head *head)
+{
+	struct page *page;
+
+	page = container_of(head, struct page, rcu_head);
+	free_page_and_swap_cache(page);
+}
+
+static inline void __tlb_remove_table_one(void *table)
+{
+	struct page *page;
+
+	page = table;
+	call_rcu(&page->rcu_head, __tlb_remove_table_one_rcu);
+}
+#endif /* CONFIG_PT_RECLAIM */
+
 #endif /* _ASM_X86_TLB_H */
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 5358d43886ad..199b9a3813b4 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -60,10 +60,17 @@ void __init native_pv_lock_init(void)
		static_branch_disable(&virt_spin_lock_key);
 }

+#ifndef CONFIG_PT_RECLAIM
 static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
 {
	tlb_remove_page(tlb, table);
 }
+#else
+static void native_tlb_remove_table(struct mmu_gather *tlb, void *table)
+{
+	tlb_remove_table(tlb, table);
+}
+#endif

 struct static_key paravirt_steal_enabled;
 struct static_key paravirt_steal_rq_enabled;
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 93e54ba91fbf..cd5bf2157611 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -18,7 +18,7 @@ EXPORT_SYMBOL(physical_mask);
 #define PGTABLE_HIGHMEM 0
 #endif

-#ifndef CONFIG_PARAVIRT
+#if !defined(CONFIG_PARAVIRT) && !defined(CONFIG_PT_RECLAIM)
 static inline
 void paravirt_tlb_remove_table(struct mmu_gather *tlb, void *table)
 {
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 99b3e9408aa0..1a8f7b8781a2 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -314,7 +314,7 @@ static inline void tlb_table_invalidate(struct mmu_gather *tlb)
 static void tlb_remove_table_one(void *table)
 {
	tlb_remove_table_sync_one();
-	__tlb_remove_table(table);
+	__tlb_remove_table_one(table);
 }

 static void tlb_table_flush(struct mmu_gather *tlb)
-- 
2.20.1
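
[With single-table freeing also deferred through call_rcu(), the
lockless mapping helpers give the guarantee the commit message
describes; a sketch of the reader-side contract (the "..." marks
whatever per-PTE work the caller does):

	pte = pte_offset_map_nolock(mm, pmd, &pmdval, addr, &ptl);
	if (!pte)
		return;
	/*
	 * Between here and pte_unmap(), the walker sits inside the RCU
	 * read-side section entered by __pte_offset_map(), so the PTE
	 * page cannot be freed and reused underneath us - now even when
	 * it is freed as a single table, without the IPI broadcast.
	 */
	...
	pte_unmap(pte);
]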
From nobody Fri Dec 19 16:59:30 2025
From: Qi Zheng
To: david@redhat.com, hughd@google.com, willy@infradead.org,
	mgorman@suse.de, muchun.song@linux.dev, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Qi Zheng
Subject: [RFC PATCH 6/7] x86: mm: define arch_flush_tlb_before_set_huge_page
Date: Mon, 1 Jul 2024 16:46:47 +0800

When we use mmu_gather to batch-flush the TLB and free PTE pages, the
TLB is not flushed before the pmd lock is released. This may result in
the following two situations:

1) Userland can trigger a page fault and fill in a huge page, which
   leaves both a small-size and a huge TLB entry for the same address.

2) Userland can also trigger a page fault and fill in a PTE page, which
   leaves two small-size TLB entries, but the PTE pages they map are
   different.

According to Intel's TLB Application note (317080), some x86 CPUs do
not allow case 1), so define arch_flush_tlb_before_set_huge_page to
detect and fix this issue (a sketch of the intended call order follows
the patch).

Signed-off-by: Qi Zheng
---
 arch/x86/include/asm/pgtable.h |  6 ++++++
 arch/x86/mm/pgtable.c          | 13 +++++++++++++
 2 files changed, 19 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index e39311a89bf4..f93d964ab6a3 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1668,6 +1668,12 @@ void arch_check_zapped_pte(struct vm_area_struct *vma, pte_t pte);
 #define arch_check_zapped_pmd arch_check_zapped_pmd
 void arch_check_zapped_pmd(struct vm_area_struct *vma, pmd_t pmd);
 
+#ifdef CONFIG_PT_RECLAIM
+#define arch_flush_tlb_before_set_huge_page arch_flush_tlb_before_set_huge_page
+void arch_flush_tlb_before_set_huge_page(struct mm_struct *mm,
+					 unsigned long addr);
+#endif
+
 #ifdef CONFIG_XEN_PV
 #define arch_has_hw_nonleaf_pmd_young arch_has_hw_nonleaf_pmd_young
 static inline bool arch_has_hw_nonleaf_pmd_young(void)
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index cd5bf2157611..d037f7425f82 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -926,3 +926,16 @@ void arch_check_zapped_pmd(struct vm_area_struct *vma, pmd_t pmd)
 	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) &&
 			pmd_shstk(pmd));
 }
+
+#ifdef CONFIG_PT_RECLAIM
+void arch_flush_tlb_before_set_huge_page(struct mm_struct *mm,
+					 unsigned long addr)
+{
+	if (atomic_read(&mm->tlb_flush_pending)) {
+		unsigned long start = ALIGN_DOWN(addr, PMD_SIZE);
+		unsigned long end = start + PMD_SIZE;
+
+		flush_tlb_mm_range(mm, start, end, PAGE_SHIFT, false);
+	}
+}
+#endif /* CONFIG_PT_RECLAIM */
-- 
2.20.1
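For illustration, a minimal sketch of the intended call order at a
huge-page installation site; set_huge_pmd_sketch and its arguments are
illustrative only, and the real call sites are wired up elsewhere in
this series:

	/*
	 * Illustrative sketch: the arch hook runs before the huge PMD
	 * entry becomes visible, so stale 4K TLB entries left behind by
	 * a pending batched flush cannot coexist with the new huge entry.
	 */
	static void set_huge_pmd_sketch(struct vm_area_struct *vma,
					unsigned long haddr, pmd_t *pmd,
					pmd_t entry)
	{
	#ifdef arch_flush_tlb_before_set_huge_page
		/* Flush the PMD-sized range if a batched flush is pending. */
		arch_flush_tlb_before_set_huge_page(vma->vm_mm, haddr);
	#endif
		set_pmd_at(vma->vm_mm, haddr, pmd, entry);
	}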
From nobody Fri Dec 19 16:59:30 2025
From: Qi Zheng
To: david@redhat.com, hughd@google.com, willy@infradead.org,
	mgorman@suse.de, muchun.song@linux.dev, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Qi Zheng
Subject: [RFC PATCH 7/7] x86: select ARCH_SUPPORTS_PT_RECLAIM if X86_64
Date: Mon, 1 Jul 2024 16:46:48 +0800
Message-Id: <0f3aacc9707da962398de71c127e7771c6798062.1719570849.git.zhengqi.arch@bytedance.com>

Now x86 fully supports the CONFIG_PT_RECLAIM feature. Since reclaiming
PTE pages is profitable only on 64-bit systems, select
ARCH_SUPPORTS_PT_RECLAIM only if X86_64 (a sketch of the generic
Kconfig side follows the patch).

Signed-off-by: Qi Zheng
---
 arch/x86/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index cbe5fac4b9dd..23ccd7c30adc 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -313,6 +313,7 @@ config X86
 	select FUNCTION_ALIGNMENT_4B
 	imply IMA_SECURE_AND_OR_TRUSTED_BOOT	if EFI
 	select HAVE_DYNAMIC_FTRACE_NO_PATCHABLE
+	select ARCH_SUPPORTS_PT_RECLAIM		if X86_64
 
 config INSTRUCTION_DECODER
 	def_bool y
-- 
2.20.1
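ARCH_SUPPORTS_PT_RECLAIM itself is introduced earlier in this series
(not shown here); a plausible sketch of the mm/Kconfig pairing this
select opts into, with the prompt text and dependencies assumed rather
than taken from this patch:

	config ARCH_SUPPORTS_PT_RECLAIM
		def_bool n

	config PT_RECLAIM
		bool "reclaim empty user page table pages"
		depends on ARCH_SUPPORTS_PT_RECLAIM && MMU
		select MMU_GATHER_RCU_TABLE_FREE
		help
		  Try to reclaim empty user PTE pages and return them to the
		  buddy allocator instead of leaving them attached to the mm.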