From: Mikołaj Lenczewski
To: ryan.roberts@arm.com, catalin.marinas@arm.com, will@kernel.org,
    corbet@lwn.net, maz@kernel.org, oliver.upton@linux.dev,
    joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com
Cc: Mikołaj Lenczewski, linux-arm-kernel@lists.infradead.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    kvmarm@lists.linux.dev
Subject: [RESEND RFC PATCH v1 4/5] arm64/mm: Delay tlbi in contpte_convert() under BBML2
Date: Wed, 11 Dec 2024 16:01:40 +0000
Message-ID: <20241211160218.41404-5-miko.lenczewski@arm.com>
In-Reply-To: <20241211160218.41404-1-miko.lenczewski@arm.com>
References: <20241211160218.41404-1-miko.lenczewski@arm.com>

When converting a region via contpte_convert() to use mTHP, we have two
different goals. We have to mark each entry as contiguous, and we would
like to smear the dirty and young (access) bits across all entries in
the contiguous block. Currently, we do this by first accumulating the
dirty and young bits in the block, using an atomic
__ptep_get_and_clear() and the relevant pte_{dirty,young}() calls,
performing a tlbi, and finally smearing the correct bits across the
block using __set_ptes().

This approach works fine for BBM level 0, but with support for BBM
level 2 we are allowed to reorder the tlbi to after setting the
pagetable entries. This reordering means that other threads will not
see an invalid pagetable entry, instead operating on stale data, until
we have performed our smearing and issued the invalidation.
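As an illustrative sketch only (not part of this patch), the resulting
ordering looks roughly as follows. The helper names follow the existing
arm64 contpte code, but the wrapper name contpte_convert_sketch() is
hypothetical, declarations are simplified, and the alignment of
addr/ptep and of the pfn performed by the real contpte_convert() is
omitted:

static void contpte_convert_sketch(struct mm_struct *mm, unsigned long addr,
				   pte_t *ptep, pte_t pte)
{
	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
	unsigned long start_addr = addr;
	pte_t *start_ptep = ptep;
	int i;

	/* Accumulate dirty/young from the old entries while clearing them. */
	for (i = 0; i < CONT_PTES; i++, ptep++, addr += PAGE_SIZE) {
		pte_t old = __ptep_get_and_clear(mm, addr, ptep);

		if (pte_dirty(old))
			pte = pte_mkdirty(pte);
		if (pte_young(old))
			pte = pte_mkyoung(pte);
	}

	/* BBM level 0: invalidate while the entries are still clear. */
	if (!system_supports_bbml2())
		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);

	/* Write the contiguous entries carrying the smeared bits. */
	__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);

	/*
	 * BBM level 2: stale (but valid) translations are tolerated, so
	 * the invalidation can be deferred until after the new entries
	 * are visible.
	 */
	if (system_supports_bbml2())
		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
}

The runtime system_supports_bbml2() check keeps a single code path for
both BBM levels, at the cost of one extra branch per conversion.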
Avoiding this invalid entry reduces faults in other threads, and thus
improves performance marginally (more so when there are more threads).

Signed-off-by: Mikołaj Lenczewski
---
 arch/arm64/mm/contpte.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 55107d27d3f8..fc927be800ee 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -68,9 +68,13 @@ static void contpte_convert(struct mm_struct *mm, unsigned long addr,
 			pte = pte_mkyoung(pte);
 	}
 
-	__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
+	if (!system_supports_bbml2())
+		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
 
 	__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
+
+	if (system_supports_bbml2())
+		__flush_tlb_range(&vma, start_addr, addr, PAGE_SIZE, true, 3);
 }
 
 void __contpte_try_fold(struct mm_struct *mm, unsigned long addr,
-- 
2.45.2