From: Jann Horn
Date: Fri, 03 Jan 2025 19:39:38 +0100
Subject: [PATCH] x86/mm: Fix flush_tlb_range() when used for zapping normal PMDs
Message-Id: <20250103-x86-collapse-flush-fix-v1-1-3c521856cfa6@google.com>
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra
Cc: Rik van Riel, linux-kernel@vger.kernel.org, stable@vger.kernel.org, Jann Horn

On the following path, flush_tlb_range() can be used for zapping normal
PMD entries (PMD entries that point to page tables) together with the PTE
entries in the pointed-to page table:

  collapse_pte_mapped_thp
    pmdp_collapse_flush
      flush_tlb_range

The arm64 version of flush_tlb_range() has a comment describing that it
can be used for page table removal, and does not use any last-level
invalidation optimizations. Fix the X86 version by making it behave the
same way.

Currently, X86 only uses this information for the following two purposes,
which I think means the issue doesn't have much impact:

 - In native_flush_tlb_multi() for checking if lazy TLB CPUs need to be
   IPI'd to avoid issues with speculative page table walks.
 - In Hyper-V TLB paravirtualization, again for lazy TLB stuff.

The patch "x86/mm: only invalidate final translations with INVLPGB",
which is currently under review (see ), would probably make the impact
of this a lot worse.

Cc: stable@vger.kernel.org
Fixes: 016c4d92cd16 ("x86/mm/tlb: Add freed_tables argument to flush_tlb_mm_range")
Signed-off-by: Jann Horn
---
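For reviewers who want the surrounding context in one place: the generic
pmdp_collapse_flush() path looks roughly like the sketch below (simplified,
based on mm/pgtable-generic.c; not a verbatim quote, debug checks elided).
The point is that the PMD cleared here points to a page table, so the
flush_tlb_range() call must be treated as a page-table-freeing flush rather
than a last-level-only flush:

/*
 * Simplified sketch of the generic pmdp_collapse_flush() (assumption:
 * paraphrased from mm/pgtable-generic.c, details elided).
 */
pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
			  pmd_t *pmdp)
{
	pmd_t pmd;

	/* Detach the page table that this (normal, non-huge) PMD points to. */
	pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);

	/*
	 * This must invalidate paging-structure caches as well as last-level
	 * TLB entries; on x86 it expands to flush_tlb_mm_range(), whose last
	 * argument is the freed_tables flag flipped below.
	 */
	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
	return pmd;
}

With freed_tables set, the lazy-TLB handling described in the commit message
applies to this path as well.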
 arch/x86/include/asm/tlbflush.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 02fc2aa06e9e0ecdba3fe948cafe5892b72e86c0..3da645139748538daac70166618d8ad95116eb74 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -242,7 +242,7 @@ void flush_tlb_multi(const struct cpumask *cpumask,
 	flush_tlb_mm_range((vma)->vm_mm, start, end,		\
 			((vma)->vm_flags & VM_HUGETLB)		\
 				? huge_page_shift(hstate_vma(vma))	\
-				: PAGE_SHIFT, false)
+				: PAGE_SHIFT, true)
 
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,

---
base-commit: aa135d1d0902c49ed45bec98c61c1b4022652b7e
change-id: 20250103-x86-collapse-flush-fix-fa87ac4d5834

-- 
Jann Horn