The prefetchw() dates back decades and the fundamental notion of doing
something like this on a lock is shady.
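
For context, prefetchw() is a hint to pull a cacheline into the cache with
write intent (an exclusive state). A minimal userspace sketch of the same
hint, using the GCC/Clang __builtin_prefetch() builtin (the kernel's
prefetchw() boils down to roughly this on x86); the struct and function
names below are made up for illustration:

#include <stdint.h>

struct demo_lock {
	volatile uint32_t word;
};

/*
 * rw = 1 asks for the line with write intent (PREFETCHW on x86 where
 * the instruction is available); locality = 3 means "keep it in all
 * cache levels". On a contended lock this can yank the cacheline away
 * from the current holder before we even know whether we will take
 * the lock at all, which is why doing it on mmap_lock is dubious.
 */
static inline void prefetch_for_write(const struct demo_lock *l)
{
	__builtin_prefetch(l, 1, 3);
}
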
Moreover, for a few years now faults in the fast path have been handled with
RCU + per-vma locking, hopefully not even looking at the lock to begin with.

As such just remove it.
I did not see a point in benchmarking this: since the lock is not expected
to be looked at by default, there is no reason to prefetch it.
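
To make the "not even looking at the lock" part concrete, below is a heavily
simplified sketch of the shape of the relevant code (the real logic lives in
arch/x86/mm/fault.c and mm/memory.c; retry handling, error paths and most
details are omitted, and fault_sketch() itself is a made-up name, though
lock_vma_under_rcu(), vma_end_read() and FAULT_FLAG_VMA_LOCK are the real
kernel identifiers):

static void fault_sketch(struct mm_struct *mm, unsigned long address,
			 struct pt_regs *regs)
{
	struct vm_area_struct *vma;

	/*
	 * Fast path: find the VMA under RCU and take only its per-vma
	 * lock. mmap_lock is never touched on this path, so prefetching
	 * its cacheline at entry is pure overhead here.
	 */
	vma = lock_vma_under_rcu(mm, address);
	if (vma) {
		handle_mm_fault(vma, address, FAULT_FLAG_VMA_LOCK, regs);
		vma_end_read(vma);
		return;
	}

	/*
	 * Slow path: fall back to the classic mmap_lock-protected
	 * lookup. Only this path ever touches the lock that used to
	 * be prefetched.
	 */
	mmap_read_lock(mm);
	vma = find_vma(mm, address);
	if (vma)
		handle_mm_fault(vma, address, 0, regs);
	mmap_read_unlock(mm);
}
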
Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
---
arch/x86/mm/fault.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 296d294142c8..697432f63c59 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -13,7 +13,6 @@
#include <linux/mmiotrace.h> /* kmmio_handler, ... */
#include <linux/perf_event.h> /* perf_sw_event */
#include <linux/hugetlb.h> /* hstate_index_to_shift */
-#include <linux/prefetch.h> /* prefetchw */
#include <linux/context_tracking.h> /* exception_enter(), ... */
#include <linux/uaccess.h> /* faulthandler_disabled() */
#include <linux/efi.h> /* efi_crash_gracefully_on_page_fault()*/
@@ -1496,8 +1495,6 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
address = cpu_feature_enabled(X86_FEATURE_FRED) ? fred_event_data(regs) : read_cr2();
- prefetchw(&current->mm->mmap_lock);
-
/*
* KVM uses #PF vector to deliver 'page not present' events to guests
* (asynchronous page fault mechanism). The event happens when a
--
2.43.0
On 01.04.25 16:35, Mateusz Guzik wrote:
> The prefetchw() dates back decades and the fundamental notion of doing
> something like this on a lock is shady.
>
> Moreover, for a few years now faults in the fast path have been handled with
> RCU + per-vma locking, hopefully not even looking at the lock to begin with.
>
> As such just remove it.
>
> I did not see a point in benchmarking this: since the lock is not expected
> to be looked at by default, there is no reason to prefetch it.
>
> Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
> ---
>  arch/x86/mm/fault.c | 3 ---
>  1 file changed, 3 deletions(-)
>
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> index 296d294142c8..697432f63c59 100644
> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -13,7 +13,6 @@
>  #include <linux/mmiotrace.h>		/* kmmio_handler, ... */
>  #include <linux/perf_event.h>		/* perf_sw_event */
>  #include <linux/hugetlb.h>		/* hstate_index_to_shift */
> -#include <linux/prefetch.h>		/* prefetchw */
>  #include <linux/context_tracking.h>	/* exception_enter(), ... */
>  #include <linux/uaccess.h>		/* faulthandler_disabled() */
>  #include <linux/efi.h>			/* efi_crash_gracefully_on_page_fault()*/
> @@ -1496,8 +1495,6 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
>
>  	address = cpu_feature_enabled(X86_FEATURE_FRED) ? fred_event_data(regs) : read_cr2();
>
> -	prefetchw(&current->mm->mmap_lock);
> -
>  	/*
>  	 * KVM uses #PF vector to deliver 'page not present' events to guests
>  	 * (asynchronous page fault mechanism). The event happens when a

I'm sure if this would have any value, we'd get notified about it :)

Acked-by: David Hildenbrand <david@redhat.com>

--
Cheers,

David / dhildenb
The following commit has been merged into the x86/mm branch of tip:
Commit-ID: 1701771d3069fbee154ca48e882e227fdcfbb583
Gitweb: https://git.kernel.org/tip/1701771d3069fbee154ca48e882e227fdcfbb583
Author: Mateusz Guzik <mjguzik@gmail.com>
AuthorDate: Tue, 01 Apr 2025 16:35:20 +02:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 01 Apr 2025 22:48:56 +02:00
x86/mm: Stop prefetching current->mm->mmap_lock on page faults
The prefetchw() dates back decades and the fundamental notion of doing
something like this on a lock is shady.

Moreover, for a few years now faults in the fast path have been handled with
RCU + per-vma locking, hopefully not even looking at the lock to begin with.

As such just remove it.

I did not see a point in benchmarking this: since the lock is not expected
to be looked at by default, there is no reason to prefetch it.
Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250401143520.1113572-1-mjguzik@gmail.com
---
arch/x86/mm/fault.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 296d294..697432f 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -13,7 +13,6 @@
#include <linux/mmiotrace.h> /* kmmio_handler, ... */
#include <linux/perf_event.h> /* perf_sw_event */
#include <linux/hugetlb.h> /* hstate_index_to_shift */
-#include <linux/prefetch.h> /* prefetchw */
#include <linux/context_tracking.h> /* exception_enter(), ... */
#include <linux/uaccess.h> /* faulthandler_disabled() */
#include <linux/efi.h> /* efi_crash_gracefully_on_page_fault()*/
@@ -1496,8 +1495,6 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
address = cpu_feature_enabled(X86_FEATURE_FRED) ? fred_event_data(regs) : read_cr2();
- prefetchw(&current->mm->mmap_lock);
-
/*
* KVM uses #PF vector to deliver 'page not present' events to guests
* (asynchronous page fault mechanism). The event happens when a
The following commit has been merged into the x86/mm branch of tip:
Commit-ID: 0b2695d58e800ad53e718d003310829db492a39c
Gitweb: https://git.kernel.org/tip/0b2695d58e800ad53e718d003310829db492a39c
Author: Mateusz Guzik <mjguzik@gmail.com>
AuthorDate: Tue, 01 Apr 2025 16:35:20 +02:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 01 Apr 2025 22:47:02 +02:00
x86/mm: Stop prefetching current->mm->mmap_lock on page faults
The prefetchw() dates back decades and the fundamental notion of doing
something like this on a lock is shady.

Moreover, for a few years now faults in the fast path have been handled with
RCU + per-vma locking, hopefully not even looking at the lock to begin with.

As such just remove it.

I did not see a point in benchmarking this: since the lock is not expected
to be looked at by default, there is no reason to prefetch it.
Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250401143520.1113572-1-mjguzik@gmail.com
---
arch/x86/mm/fault.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 296d294..697432f 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -13,7 +13,6 @@
#include <linux/mmiotrace.h> /* kmmio_handler, ... */
#include <linux/perf_event.h> /* perf_sw_event */
#include <linux/hugetlb.h> /* hstate_index_to_shift */
-#include <linux/prefetch.h> /* prefetchw */
#include <linux/context_tracking.h> /* exception_enter(), ... */
#include <linux/uaccess.h> /* faulthandler_disabled() */
#include <linux/efi.h> /* efi_crash_gracefully_on_page_fault()*/
@@ -1496,8 +1495,6 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
address = cpu_feature_enabled(X86_FEATURE_FRED) ? fred_event_data(regs) : read_cr2();
- prefetchw(&current->mm->mmap_lock);
-
/*
* KVM uses #PF vector to deliver 'page not present' events to guests
* (asynchronous page fault mechanism). The event happens when a
The following commit has been merged into the x86/mm branch of tip:
Commit-ID: 0f0a9bf602449c0114117a72eab4329c9a22176d
Gitweb: https://git.kernel.org/tip/0f0a9bf602449c0114117a72eab4329c9a22176d
Author: Mateusz Guzik <mjguzik@gmail.com>
AuthorDate: Tue, 01 Apr 2025 16:35:20 +02:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 01 Apr 2025 20:26:35 +02:00
x86/mm: Stop prefetching current->mm->mmap_lock on page faults
The prefetchw() dates back decades and the fundamental notion of doing
something like this on a lock is shady.

Moreover, for a few years now faults in the fast path have been handled with
RCU + per-vma locking, hopefully not even looking at the lock to begin with.

As such just remove it.

I did not see a point in benchmarking this: since the lock is not expected
to be looked at by default, there is no reason to prefetch it.
Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250401143520.1113572-1-mjguzik@gmail.com
---
arch/x86/mm/fault.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 296d294..697432f 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -13,7 +13,6 @@
#include <linux/mmiotrace.h> /* kmmio_handler, ... */
#include <linux/perf_event.h> /* perf_sw_event */
#include <linux/hugetlb.h> /* hstate_index_to_shift */
-#include <linux/prefetch.h> /* prefetchw */
#include <linux/context_tracking.h> /* exception_enter(), ... */
#include <linux/uaccess.h> /* faulthandler_disabled() */
#include <linux/efi.h> /* efi_crash_gracefully_on_page_fault()*/
@@ -1496,8 +1495,6 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
address = cpu_feature_enabled(X86_FEATURE_FRED) ? fred_event_data(regs) : read_cr2();
- prefetchw(&current->mm->mmap_lock);
-
/*
* KVM uses #PF vector to deliver 'page not present' events to guests
* (asynchronous page fault mechanism). The event happens when a