From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: 21cnbao@gmail.com, baolin.wang@linux.alibaba.com, chrisl@kernel.org,
	david@redhat.com, ioworker0@gmail.com, kasong@tencent.com,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-riscv@lists.infradead.org, lorenzo.stoakes@oracle.com,
	ryan.roberts@arm.com, v-songbaohua@oppo.com, x86@kernel.org,
	ying.huang@intel.com, zhengtangquan@oppo.com,
	Catalin Marinas, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, "H. Peter Anvin", Anshuman Khandual, Shaoqin Huang,
	Gavin Shan, Kefeng Wang, Mark Rutland, "Kirill A. Shutemov",
	Yosry Ahmed, Paul Walmsley, Palmer Dabbelt, Albert Ou,
	Yicong Yang, Will Deacon
Subject: [PATCH v3 2/4] mm: Support tlbbatch flush for a range of PTEs
Date: Wed, 15 Jan 2025 16:38:06 +1300
Message-Id: <20250115033808.40641-3-21cnbao@gmail.com>
In-Reply-To: <20250115033808.40641-1-21cnbao@gmail.com>
References: <20250115033808.40641-1-21cnbao@gmail.com>

From: Barry Song

This patch lays the groundwork for supporting batch PTE unmapping in
try_to_unmap_one(). It introduces range handling for TLB batch flushing,
with the range currently fixed at PAGE_SIZE, so no caller's behaviour
changes yet.

The function __flush_tlb_range_nosync() is architecture-specific and only
used within arch/arm64; it needs nothing from the vma beyond its mm. To
allow it to be reused by arch_tlbbatch_add_pending(), which has an mm but
no vma, change __flush_tlb_range_nosync() to take the mm_struct directly.
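To make the interface change easy to see at a glance, here is a minimal
before/after sketch (illustration only, not part of the diff; the
signatures and the rmap call site are taken from the hunks below):

  /* Before: the batch hook takes a single address, one PTE per call. */
  void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
                                 struct mm_struct *mm,
                                 unsigned long uaddr);

  /* After: it takes a [start, end) virtual range. rmap still passes
   * exactly one page for now, e.g. in try_to_unmap_one():
   *
   *     set_tlb_ubc_flush_pending(mm, pteval, address,
   *                               address + PAGE_SIZE);
   */
  void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
                                 struct mm_struct *mm,
                                 unsigned long start,
                                 unsigned long end);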
Shutemov" Cc: Yosry Ahmed Cc: Paul Walmsley Cc: Palmer Dabbelt Cc: Albert Ou Cc: Yicong Yang Signed-off-by: Barry Song Acked-by: Will Deacon Reviewed-by: Kefeng Wang --- arch/arm64/include/asm/tlbflush.h | 25 +++++++++++++------------ arch/arm64/mm/contpte.c | 2 +- arch/riscv/include/asm/tlbflush.h | 5 +++-- arch/riscv/mm/tlbflush.c | 5 +++-- arch/x86/include/asm/tlbflush.h | 5 +++-- mm/rmap.c | 12 +++++++----- 6 files changed, 30 insertions(+), 24 deletions(-) diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlb= flush.h index bc94e036a26b..98fbc8df7cf3 100644 --- a/arch/arm64/include/asm/tlbflush.h +++ b/arch/arm64/include/asm/tlbflush.h @@ -322,13 +322,6 @@ static inline bool arch_tlbbatch_should_defer(struct m= m_struct *mm) return true; } =20 -static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_ba= tch *batch, - struct mm_struct *mm, - unsigned long uaddr) -{ - __flush_tlb_page_nosync(mm, uaddr); -} - /* * If mprotect/munmap/etc occurs during TLB batched flushing, we need to * synchronise all the TLBI issued with a DSB to avoid the race mentioned = in @@ -448,7 +441,7 @@ static inline bool __flush_tlb_range_limit_excess(unsig= ned long start, return false; } =20 -static inline void __flush_tlb_range_nosync(struct vm_area_struct *vma, +static inline void __flush_tlb_range_nosync(struct mm_struct *mm, unsigned long start, unsigned long end, unsigned long stride, bool last_level, int tlb_level) @@ -460,12 +453,12 @@ static inline void __flush_tlb_range_nosync(struct vm= _area_struct *vma, pages =3D (end - start) >> PAGE_SHIFT; =20 if (__flush_tlb_range_limit_excess(start, end, pages, stride)) { - flush_tlb_mm(vma->vm_mm); + flush_tlb_mm(mm); return; } =20 dsb(ishst); - asid =3D ASID(vma->vm_mm); + asid =3D ASID(mm); =20 if (last_level) __flush_tlb_range_op(vale1is, start, pages, stride, asid, @@ -474,7 +467,7 @@ static inline void __flush_tlb_range_nosync(struct vm_a= rea_struct *vma, __flush_tlb_range_op(vae1is, start, pages, stride, asid, tlb_level, true, lpa2_is_enabled()); =20 - mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end); + mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end); } =20 static inline void __flush_tlb_range(struct vm_area_struct *vma, @@ -482,7 +475,7 @@ static inline void __flush_tlb_range(struct vm_area_str= uct *vma, unsigned long stride, bool last_level, int tlb_level) { - __flush_tlb_range_nosync(vma, start, end, stride, + __flush_tlb_range_nosync(vma->vm_mm, start, end, stride, last_level, tlb_level); dsb(ish); } @@ -533,6 +526,14 @@ static inline void __flush_tlb_kernel_pgtable(unsigned= long kaddr) dsb(ish); isb(); } + +static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_ba= tch *batch, + struct mm_struct *mm, + unsigned long start, + unsigned long end) +{ + __flush_tlb_range_nosync(mm, start, end, PAGE_SIZE, true, 3); +} #endif =20 #endif diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c index 55107d27d3f8..bcac4f55f9c1 100644 --- a/arch/arm64/mm/contpte.c +++ b/arch/arm64/mm/contpte.c @@ -335,7 +335,7 @@ int contpte_ptep_clear_flush_young(struct vm_area_struc= t *vma, * eliding the trailing DSB applies here. 
 		 */
 		addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
-		__flush_tlb_range_nosync(vma, addr, addr + CONT_PTE_SIZE,
+		__flush_tlb_range_nosync(vma->vm_mm, addr, addr + CONT_PTE_SIZE,
 					 PAGE_SIZE, true, 3);
 	}
 
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index 72e559934952..e4c533691a7d 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -60,8 +60,9 @@ void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
 
 bool arch_tlbbatch_should_defer(struct mm_struct *mm);
 void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
-			       struct mm_struct *mm,
-			       unsigned long uaddr);
+			       struct mm_struct *mm,
+			       unsigned long start,
+			       unsigned long end);
 void arch_flush_tlb_batched_pending(struct mm_struct *mm);
 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
 
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 9b6e86ce3867..6d6e8e7cc576 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -186,8 +186,9 @@ bool arch_tlbbatch_should_defer(struct mm_struct *mm)
 }
 
 void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
-			       struct mm_struct *mm,
-			       unsigned long uaddr)
+			       struct mm_struct *mm,
+			       unsigned long start,
+			       unsigned long end)
 {
 	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
 }
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 69e79fff41b8..2b511972d008 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -278,8 +278,9 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
 }
 
 static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
-					     struct mm_struct *mm,
-					     unsigned long uaddr)
+					     struct mm_struct *mm,
+					     unsigned long start,
+					     unsigned long end)
 {
 	inc_mm_tlb_gen(mm);
 	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
diff --git a/mm/rmap.c b/mm/rmap.c
index de6b8c34e98c..abeb9fcec384 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -672,7 +672,8 @@ void try_to_unmap_flush_dirty(void)
 	(TLB_FLUSH_BATCH_PENDING_MASK / 2)
 
 static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
-				      unsigned long uaddr)
+				      unsigned long start,
+				      unsigned long end)
 {
 	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
 	int batch;
@@ -681,7 +682,7 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
 	if (!pte_accessible(mm, pteval))
 		return;
 
-	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr);
+	arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, start, end);
 	tlb_ubc->flush_required = true;
 
 	/*
@@ -757,7 +758,8 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
 }
 #else
 static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval,
-				      unsigned long uaddr)
+				      unsigned long start,
+				      unsigned long end)
 {
 }
 
@@ -1792,7 +1794,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 */
 			pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-			set_tlb_ubc_flush_pending(mm, pteval, address);
+			set_tlb_ubc_flush_pending(mm, pteval, address, address + PAGE_SIZE);
 		} else {
 			pteval = ptep_clear_flush(vma, address, pvmw.pte);
 		}
@@ -2164,7 +2166,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			 */
 			pteval = ptep_get_and_clear(mm, address, pvmw.pte);
 
-			set_tlb_ubc_flush_pending(mm, pteval, address);
+			set_tlb_ubc_flush_pending(mm, pteval, address, address + PAGE_SIZE);
 		} else {
 			pteval = ptep_clear_flush(vma, address, pvmw.pte);
 		}
-- 
2.39.3 (Apple Git-146)