If the warn mode is used with the mitigation disabled, then split lock
detection is disabled on each CPU where a split lock occurs in order to
make progress, and delayed work is scheduled to re-enable detection
afterwards. It turns out that all CPUs use one global delayed work
structure. As a result, if split locks occur on several CPUs at roughly
the same time (within 2 jiffies), only one CPU schedules the delayed work
and the rest do not. The return value of schedule_delayed_work_on() would
have shown this, but it is not checked in the code.
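
(Illustration only, not part of the patch: a minimal sketch of how the shared
work item loses requests. schedule_delayed_work_on() returns false when the
work is already pending, which is exactly the case the current code ignores.
The demo_reenable_on() helper and the pr_warn() message are made up for this
example.)

#include <linux/workqueue.h>
#include <linux/printk.h>

static void __split_lock_reenable(struct work_struct *work)
{
	/* Re-enables split lock detection; body elided in this sketch. */
}

static DECLARE_DELAYED_WORK(sl_reenable, __split_lock_reenable);

static void demo_reenable_on(int cpu)
{
	/*
	 * schedule_delayed_work_on() returns false if &sl_reenable is already
	 * pending, i.e. another CPU queued it within the last 2 jiffies. That
	 * second CPU then keeps split lock detection disabled because nothing
	 * re-arms the work for it.
	 */
	if (!schedule_delayed_work_on(cpu, &sl_reenable, 2))
		pr_warn("CPU %d: re-enable work already pending, request lost\n",
			cpu);
}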
A diagram that helps to understand how to reproduce the bug:
https://lore.kernel.org/all/2cd54041-253b-4e78-b8ea-dbe9b884ff9b@yandex-team.ru/
To fix the warn mode with the mitigation disabled, the delayed work has to
be per-CPU.
v2 -> v3:
* The place and time of the per-CPU structure initialization were changed:
  an initcall doesn't seem to be a good place for it, so deferred
  initialization on first use is applied instead (a sketch of the initcall
  variant follows below).
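
(For context, a sketch of the boot-time initcall variant that this changelog
rejects; it is not the patch's code and the sl_reenable_works_init() name is
made up. The per-CPU delayed_work is not initialized statically here, so each
per-CPU instance has to be set up at run time, either once at boot as below
or, as v3 does, lazily on first use in split_lock_warn().)

#include <linux/init.h>
#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

static DEFINE_PER_CPU(struct delayed_work, sl_reenable);

static void __split_lock_reenable(struct work_struct *work)
{
	/* Re-enables split lock detection; body elided in this sketch. */
}

/* Hypothetical boot-time alternative to the deferred initialization */
static int __init sl_reenable_works_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		INIT_DELAYED_WORK(per_cpu_ptr(&sl_reenable, cpu),
				  __split_lock_reenable);
	return 0;
}
late_initcall(sl_reenable_works_init);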
Fixes: 727209376f49 ("x86/split_lock: Add sysctl to control the misery mode")
Signed-off-by: Maksim Davydov <davydov-max@yandex-team.ru>
---
arch/x86/kernel/cpu/intel.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index e7656cbef68d..b288ef4f1ad0 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -1071,7 +1071,13 @@ static void __split_lock_reenable(struct work_struct *work)
 {
 	sld_update_msr(true);
 }
-static DECLARE_DELAYED_WORK(sl_reenable, __split_lock_reenable);
+/*
+ * In order for each CPU to schedule its delayed work independently of the
+ * others, the delayed work struct must be per-CPU. This is not required when
+ * sysctl_sld_mitigate is enabled, because the semaphore already limits the
+ * number of simultaneously scheduled delayed works to 1.
+ */
+static DEFINE_PER_CPU(struct delayed_work, sl_reenable);
 
 /*
  * If a CPU goes offline with pending delayed work to re-enable split lock
@@ -1092,7 +1098,7 @@ static int splitlock_cpu_offline(unsigned int cpu)
 
 static void split_lock_warn(unsigned long ip)
 {
-	struct delayed_work *work;
+	struct delayed_work *work = NULL;
 	int cpu;
 
 	if (!current->reported_split_lock)
@@ -1114,11 +1120,17 @@ static void split_lock_warn(unsigned long ip)
 		if (down_interruptible(&buslock_sem) == -EINTR)
 			return;
 		work = &sl_reenable_unlock;
-	} else {
-		work = &sl_reenable;
 	}
 
 	cpu = get_cpu();
+
+	if (!work) {
+		work = this_cpu_ptr(&sl_reenable);
+		/* Deferred initialization of per-CPU struct */
+		if (!work->work.func)
+			INIT_DELAYED_WORK(work, __split_lock_reenable);
+	}
+
 	schedule_delayed_work_on(cpu, work, 2);
 
 	/* Disable split lock detection on this CPU to make progress */
--
2.34.1
On 13/11/2024 11:23, Maksim Davydov wrote:
> If the warn mode with disabled mitigation mode is used, then on each
> CPU where the split lock occurred detection will be disabled in order to
> make progress and delayed work will be scheduled, which then will enable
> detection back. Now it turns out that all CPUs use one global delayed
> work structure. This leads to the fact that if a split lock occurs on
> several CPUs at the same time (within 2 jiffies), only one CPU will
> schedule delayed work, but the rest will not. The return value of
> schedule_delayed_work_on() would have shown this, but it is not checked
> in the code.
>
> A diagram that can help to understand the bug reproduction:
> https://lore.kernel.org/all/2cd54041-253b-4e78-b8ea-dbe9b884ff9b@yandex-team.ru/
>
> In order to fix the warn mode with disabled mitigation mode, delayed work
> has to be a per-CPU.
>
> v3 -> v2:
> * place and time of the per-CPU structure initialization were changed.
>   initcall doesn't seem to be a good place for it, so deferred
>   initialization is used.
>
> Fixes: 727209376f49 ("x86/split_lock: Add sysctl to control the misery mode")
> Signed-off-by: Maksim Davydov <davydov-max@yandex-team.ru>
> ---
>  arch/x86/kernel/cpu/intel.c | 20 ++++++++++++++++----
>  1 file changed, 16 insertions(+), 4 deletions(-)

Hi Maksim, thanks for resubmitting again. I think this is indeed a valid fix,
but I've also noticed that recently (as in this week) the code moved from
intel.c to a more generic file, since AMD is apparently enabling split lock
detection in their CPUs [0].

So I'd suggest you rebase against 6.13-rc; that would likely increase the
chances of a merge. Once you do that, I can try to test it as well, though I
don't personally have an Intel CPU with that feature (but some friends do).

Cheers,

Guilherme

[0] https://lore.kernel.org/r/ZzuBNj4JImJGUNJc@gmail.com/