Compiler CSE and SSA GVN optimizations can cause the address dependency
of pointers returned by rcu_dereference() to be lost when those pointers
are compared against either constants or previously loaded pointers.
Introduce ptr_eq() to compare two addresses while preserving the address
dependencies for later use of the address. It should be used when
comparing an address returned by rcu_dereference().
This is needed to prevent the compiler CSE and SSA GVN optimizations
from using @a (or @b) in places where the source refers to @b (or @a)
based on the fact that after the comparison, the two are known to be
equal, which does not preserve address dependencies and allows the
following misordering speculations:
- If @b is a constant, the compiler can issue the loads which depend
on @a before loading @a.
- If @b is a register populated by a prior load, weakly-ordered
CPUs can speculate loads which depend on @a before loading @a.
The same logic applies with @a and @b swapped.
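As an illustration (a hypothetical sketch, not part of the patch), the shape of ptr_eq() with a stand-in for the kernel's OPTIMIZER_HIDE_VAR() looks like this; the asm constraints force each pointer through a register the optimizer cannot see through, so later dereferences keep their address dependency:

```c
#include <stddef.h>

/* Stand-in for OPTIMIZER_HIDE_VAR() from include/linux/compiler.h. */
#define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var))

/*
 * Compare two addresses without letting the compiler learn that the
 * two pointers are interchangeable after a successful comparison.
 */
static inline int ptr_eq(const volatile void *a, const volatile void *b)
{
	OPTIMIZER_HIDE_VAR(a);
	OPTIMIZER_HIDE_VAR(b);
	return a == b;
}
```

Unlike a plain `a == b`, neither operand reaching the comparison is known to the optimizer to equal any other SSA value, so CSE/GVN cannot substitute one pointer for the other afterwards.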
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Suggested-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Tested-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Acked-by: "Paul E. McKenney" <paulmck@kernel.org>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: John Stultz <jstultz@google.com>
Cc: Neeraj Upadhyay <Neeraj.Upadhyay@amd.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Zqiang <qiang.zhang1211@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Waiman Long <longman@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: maged.michael@gmail.com
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Gary Guo <gary@garyguo.net>
Cc: Jonas Oberhauser <jonas.oberhauser@huaweicloud.com>
Cc: rcu@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: lkmm@lists.linux.dev
Cc: Nikita Popov <github@npopov.com>
Cc: llvm@lists.linux.dev
---
Changes since v0:
- Include feedback from Alan Stern.
---
include/linux/compiler.h | 63 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 63 insertions(+)
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 5b45ea7dff3e..c5ca3b54c112 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -163,6 +163,69 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
__asm__ ("" : "=r" (var) : "0" (var))
#endif
+/*
+ * Compare two addresses while preserving the address dependencies for
+ * later use of the address. It should be used when comparing an address
+ * returned by rcu_dereference().
+ *
+ * This is needed to prevent the compiler CSE and SSA GVN optimizations
+ * from using @a (or @b) in places where the source refers to @b (or @a)
+ * based on the fact that after the comparison, the two are known to be
+ * equal, which does not preserve address dependencies and allows the
+ * following misordering speculations:
+ *
+ * - If @b is a constant, the compiler can issue the loads which depend
+ * on @a before loading @a.
+ * - If @b is a register populated by a prior load, weakly-ordered
+ * CPUs can speculate loads which depend on @a before loading @a.
+ *
+ * The same logic applies with @a and @b swapped.
+ *
+ * Return value: true if pointers are equal, false otherwise.
+ *
+ * The compiler barrier() is ineffective at fixing this issue. It does
+ * not prevent the compiler CSE from losing the address dependency:
+ *
+ * int fct_2_volatile_barriers(void)
+ * {
+ * int *a, *b;
+ *
+ * do {
+ * a = READ_ONCE(p);
+ * asm volatile ("" : : : "memory");
+ * b = READ_ONCE(p);
+ * } while (a != b);
+ * asm volatile ("" : : : "memory"); <-- barrier()
+ * return *b;
+ * }
+ *
+ * With gcc 14.2 (arm64):
+ *
+ * fct_2_volatile_barriers:
+ * adrp x0, .LANCHOR0
+ * add x0, x0, :lo12:.LANCHOR0
+ * .L2:
+ * ldr x1, [x0] <-- x1 populated by first load.
+ * ldr x2, [x0]
+ * cmp x1, x2
+ * bne .L2
+ * ldr w0, [x1] <-- x1 is used for access which should depend on b.
+ * ret
+ *
+ * On weakly-ordered architectures, this lets CPU speculation use the
+ * result from the first load to speculate "ldr w0, [x1]" before
+ * "ldr x2, [x0]".
+ * Based on the RCU documentation, the control dependency does not
+ * prevent the CPU from speculating loads.
+ */
+static __always_inline
+int ptr_eq(const volatile void *a, const volatile void *b)
+{
+ OPTIMIZER_HIDE_VAR(a);
+ OPTIMIZER_HIDE_VAR(b);
+ return a == b;
+}
+
#define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
/**
--
2.39.5
On Wed, 17 Dec 2025 20:45:28 -0500
Mathieu Desnoyers <mathieu.desnoyers@efficios.com> wrote:
> [...]
> + * The compiler barrier() is ineffective at fixing this issue. It does
> + * not prevent the compiler CSE from losing the address dependency:
> [...]
> + * On weakly-ordered architectures, this lets CPU speculation use the
> + * result from the first load to speculate "ldr w0, [x1]" before
> + * "ldr x2, [x0]".
> + * Based on the RCU documentation, the control dependency does not
> + * prevent the CPU from speculating loads.
I'm not sure that example (of something that doesn't work) is really necessary.
Take the simpler example:
	return a == b ? *a : 0;
Here the generated code might speculatively dereference 'b' (not 'a') before
returning zero when the pointers are different.
David
On Thu, 18 Dec 2025 09:03:13 +0000
David Laight <david.laight.linux@gmail.com> wrote:
> On Wed, 17 Dec 2025 20:45:28 -0500
> Mathieu Desnoyers <mathieu.desnoyers@efficios.com> wrote:
>
> > [...]
>
> I'm not sure that example (of something that doesn't work) is really necessary.
> The simple example of, given:
> return a == b ? *a : 0;
> the generated code might speculatively dereference 'b' (not a) before returning
> zero when the pointers are different.
I'm not sure I understand what you're saying.
`b` cannot be speculatively dereferenced by the compiler in a code path
where the pointers are different, as the compiler cannot ascertain that
the dereference is valid.
The speculative execution on the processor side *does not* matter here as
it needs to honour address dependency (unless you're Alpha, which is why we
add a `mb()` in each `READ_ONCE`).
Best,
Gary
On Thu, 18 Dec 2025 14:27:36 +0000
Gary Guo <gary@garyguo.net> wrote:
> On Thu, 18 Dec 2025 09:03:13 +0000
> David Laight <david.laight.linux@gmail.com> wrote:
>
> > On Wed, 17 Dec 2025 20:45:28 -0500
> > Mathieu Desnoyers <mathieu.desnoyers@efficios.com> wrote:
> >
> > > [...]
> >
> > I'm not sure that example (of something that doesn't work) is really necessary.
> > The simple example of, given:
> > return a == b ? *a : 0;
> > the generated code might speculatively dereference 'b' (not a) before returning
> > zero when the pointers are different.
>
> I'm not sure I understand what you're saying.
>
> `b` cannot be speculatively dereferenced by the compiler in code-path
> where pointers are different, as the compiler cannot ascertain that it is
> valid.
The 'validity' doesn't matter for speculative execution.
> The speculative execution on the processor side *does not* matter here as
> it needs to honour address dependency (unless you're Alpha, which is why we
> add a `mb()` in each `READ_ONCE`).
There isn't an 'address dependency', that is the problem.
The issue is that 'a == b ? *a : 0' and 'a == b ? *b : 0' always evaluate
to the same value and the compiler will (effectively) substitute one for the
other.
But sometimes you really do care which pointer is speculatively dereferenced
when they are different.
Memory barriers can only enforce the order of the reads of 'a', 'b' and '*a',
they won't change whether the generated code contains '*a' or '*b'.
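To make the point concrete (hypothetical function names, a sketch only): the two bodies below are extensionally equal, so a conforming compiler is free to lower either one as the other, and no memory barrier changes which pointer the emitted code dereferences.

```c
/*
 * Both functions return identical values for every input, so the
 * compiler may emit the same machine code for both; C semantics do
 * not pin down which pointer the generated code actually loads from.
 */
static int via_a(int *a, int *b) { return a == b ? *a : 0; }
static int via_b(int *a, int *b) { return a == b ? *b : 0; }
```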
David
On 2025-12-18 04:03, David Laight wrote:
[...]
>> + * The compiler barrier() is ineffective at fixing this issue. It does
>> + * not prevent the compiler CSE from losing the address dependency:
>> [...]
>> + * Based on the RCU documentation, the control dependency does not
>> + * prevent the CPU from speculating loads.
>
> I'm not sure that example (of something that doesn't work) is really necessary.
> The simple example of, given:
> return a == b ? *a : 0;
> the generated code might speculatively dereference 'b' (not a) before returning
> zero when the pointers are different.
In the past discussion that led to this new API, AFAIU, Linus made it
clear that this counter-example needs to be in a comment:
https://lore.kernel.org/lkml/CAHk-=wgBgh5U+dyNaN=+XCdcm2OmgSRbcH4Vbtk8i5ZDGwStSA@mail.gmail.com/
This counter-example is what convinced him that this addresses a real
issue.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
On Thu, 18 Dec 2025 08:51:02 -0500
Mathieu Desnoyers <mathieu.desnoyers@efficios.com> wrote:
> On 2025-12-18 04:03, David Laight wrote:
> [...]
> >> [...]
> >
> > I'm not sure that example (of something that doesn't work) is really necessary.
> > The simple example of, given:
> > return a == b ? *a : 0;
> > the generated code might speculatively dereference 'b' (not a) before returning
> > zero when the pointers are different.
>
> In the past discussion that led to this new API, AFAIU, Linus made it
> clear that this counter example needs to be in a comment:
I might remember that...
But if you read the proposed comment, it starts to look like an example.
It is also very long for the file it is in - even if clearly marked as
explaining why the same effect can't be achieved with barrier().
Maybe the long gory comment belongs in the rst file?
I do wonder if some places need this:
#define OPTIMISER_HIDE_VAL(x) ({ auto _x = (x); OPTIMIZER_HIDE_VAR(_x); _x; })
Then you could do:
#define ptr_eq(x, y) (OPTIMISER_HIDE_VAL(x) == OPTIMISER_HIDE_VAL(y))
which also checks that the two pointers have compatible types.
But it would be more generally useful for hiding constants from the optimiser.
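A sketch of how that variant might read in GNU C (using __auto_type rather than C23 auto, with a stand-in for the kernel's OPTIMIZER_HIDE_VAR(); the OPTIMISER_HIDE_VAL name and this shape are hypothetical):

```c
/* Stand-in for OPTIMIZER_HIDE_VAR() from include/linux/compiler.h. */
#define OPTIMIZER_HIDE_VAR(var) __asm__ ("" : "=r" (var) : "0" (var))

/* Hide a value (not just an lvalue) from the optimiser: sketch only. */
#define OPTIMISER_HIDE_VAL(x) \
	({ __auto_type _x = (x); OPTIMIZER_HIDE_VAR(_x); _x; })

/*
 * Because the hidden values are compared directly with ==, the
 * compiler still diagnoses incompatible pointer types, unlike an
 * interface that casts both sides to void *.
 */
#define ptr_eq(x, y) (OPTIMISER_HIDE_VAL(x) == OPTIMISER_HIDE_VAL(y))
```

One caveat of the statement-expression form: each use evaluates its argument exactly once, but the temporary copy means OPTIMISER_HIDE_VAL() cannot be applied to an lvalue you intend to modify in place.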
David