From: Xu Lu
To: pjw@kernel.org, palmer@dabbelt.com, aou@eecs.berkeley.edu,
	alex@ghiti.fr, kees@kernel.org, mingo@redhat.com,
	peterz@infradead.org, juri.lelli@redhat.com,
	vincent.guittot@linaro.org, akpm@linux-foundation.org,
	david@redhat.com, apatel@ventanamicro.com, guoren@kernel.org
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, Xu Lu
Subject: [RFC PATCH v2 5/9] riscv: mm: Introduce arch_do_shoot_lazy_tlb
Date: Thu, 27 Nov 2025 22:11:13 +0800
Message-ID: <20251127141117.87420-6-luxu.kernel@bytedance.com>
In-Reply-To: <20251127141117.87420-1-luxu.kernel@bytedance.com>
References: <20251127141117.87420-1-luxu.kernel@bytedance.com>

When an active_mm is shot down, switch the current task over to init_mm
and evict the mm from the per-CPU active mm array. If the ASID
allocator is in use, also flush the mm's ASID locally.
Signed-off-by: Xu Lu
---
 arch/riscv/include/asm/mmu_context.h |  5 ++++
 arch/riscv/include/asm/tlbflush.h    | 11 +++++++++
 arch/riscv/mm/context.c              | 19 ++++++++++++++++
 arch/riscv/mm/tlbflush.c             | 34 ++++++++++++++++++++++++----
 4 files changed, 64 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/include/asm/mmu_context.h b/arch/riscv/include/asm/mmu_context.h
index 8c4bc49a3a0f5..bc73cc3262ae6 100644
--- a/arch/riscv/include/asm/mmu_context.h
+++ b/arch/riscv/include/asm/mmu_context.h
@@ -16,6 +16,11 @@
 void switch_mm(struct mm_struct *prev, struct mm_struct *next,
	       struct task_struct *task);
 
+#ifdef CONFIG_RISCV_LAZY_TLB_FLUSH
+#define arch_do_shoot_lazy_tlb arch_do_shoot_lazy_tlb
+void arch_do_shoot_lazy_tlb(void *arg);
+#endif
+
 #define activate_mm activate_mm
 static inline void activate_mm(struct mm_struct *prev,
			       struct mm_struct *next)
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index 3f83fd5ef36db..e7365a53265a6 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -15,6 +15,11 @@
 #define FLUSH_TLB_NO_ASID	((unsigned long)-1)
 
 #ifdef CONFIG_MMU
+static inline unsigned long get_mm_asid(struct mm_struct *mm)
+{
+	return mm ? cntx2asid(atomic_long_read(&mm->context.id)) : FLUSH_TLB_NO_ASID;
+}
+
 static inline void local_flush_tlb_all(void)
 {
	__asm__ __volatile__ ("sfence.vma" : : : "memory");
@@ -86,11 +91,17 @@ struct tlb_info {
 DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_info, tlbinfo);
 
 void local_load_tlb_mm(struct mm_struct *mm);
+void local_flush_tlb_mm(struct mm_struct *mm);
 
 #else /* CONFIG_RISCV_LAZY_TLB_FLUSH */
 
 static inline void local_load_tlb_mm(struct mm_struct *mm) {}
 
+static inline void local_flush_tlb_mm(struct mm_struct *mm)
+{
+	local_flush_tlb_all_asid(get_mm_asid(mm));
+}
+
 #endif /* CONFIG_RISCV_LAZY_TLB_FLUSH */
 
 #else /* CONFIG_MMU */
diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
index a7cf36ad34678..3335080e5f720 100644
--- a/arch/riscv/mm/context.c
+++ b/arch/riscv/mm/context.c
@@ -274,6 +274,25 @@ static int __init asids_init(void)
	return 0;
 }
 early_initcall(asids_init);
+
+#ifdef CONFIG_RISCV_LAZY_TLB_FLUSH
+void arch_do_shoot_lazy_tlb(void *arg)
+{
+	struct mm_struct *mm = arg;
+
+	if (current->active_mm == mm) {
+		WARN_ON_ONCE(current->mm);
+		current->active_mm = &init_mm;
+		switch_mm(mm, &init_mm, current);
+	}
+
+	if (!static_branch_unlikely(&use_asid_allocator) || !mm)
+		return;
+
+	local_flush_tlb_mm(mm);
+}
+#endif /* CONFIG_RISCV_LAZY_TLB_FLUSH */
+
 #else
 static inline void set_mm(struct mm_struct *prev, struct mm_struct *next,
			  unsigned int cpu)
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index 4b2ce06cbe6bd..a47bacf5801ab 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -164,11 +164,6 @@ static void __ipi_flush_tlb_range_asid(void *info)
	local_flush_tlb_range_asid(d->start, d->size, d->stride, d->asid);
 }
 
-static inline unsigned long get_mm_asid(struct mm_struct *mm)
-{
-	return mm ? cntx2asid(atomic_long_read(&mm->context.id)) : FLUSH_TLB_NO_ASID;
-}
-
 static void __flush_tlb_range(struct mm_struct *mm,
			      const struct cpumask *cmask,
			      unsigned long start, unsigned long size,
@@ -352,4 +347,33 @@ void local_load_tlb_mm(struct mm_struct *mm)
	}
 }
 
+void local_flush_tlb_mm(struct mm_struct *mm)
+{
+	struct tlb_info *info = this_cpu_ptr(&tlbinfo);
+	struct tlb_context *contexts = info->contexts;
+	unsigned long asid = get_mm_asid(mm);
+	unsigned int i;
+
+	if (!mm || mm == info->active_mm) {
+		local_flush_tlb_all_asid(asid);
+		return;
+	}
+
+	for (i = 0; i < MAX_LOADED_MM; i++) {
+		if (contexts[i].mm != mm)
+			continue;
+
+		write_lock(&info->rwlock);
+		contexts[i].mm = NULL;
+		contexts[i].gen = 0;
+		write_unlock(&info->rwlock);
+
+		cpumask_clear_cpu(raw_smp_processor_id(), mm_cpumask(mm));
+		mmdrop_lazy_mm(mm);
+		break;
+	}
+
+	local_flush_tlb_all_asid(asid);
+}
+
 #endif /* CONFIG_RISCV_LAZY_TLB_FLUSH */
-- 
2.20.1