Michael reported soft lockups on a system that has unaccepted memory.
This occurs when a user attempts to allocate and accept memory on
multiple CPUs simultaneously.
The root cause of the issue is that memory acceptance is serialized with
a spinlock, allowing only one CPU to accept memory at a time. The other
CPUs spin and wait for their turn, leading to starvation and soft lockup
reports.
To address this, the code has been modified to release the spinlock
while accepting memory. This allows for parallel memory acceptance on
multiple CPUs.
A newly introduced "accepting_list" keeps track of which memory is
currently being accepted. This is necessary to prevent parallel
acceptance of the same memory block. If a collision occurs, the lock is
released and the process is retried.
Such collisions should rarely occur. The main path for memory acceptance
is the page allocator, which accepts memory in MAX_ORDER chunks. As long
as MAX_ORDER is equal to or larger than the unit_size, collisions will
never occur because the caller fully owns the memory block being
accepted.
Aside from the page allocator, only memblock and deferred_free_range()
accept memory, but this only happens during boot.
The code has been tested with unit_size == 128MiB to trigger collisions
and validate the retry codepath.
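In short, the resulting accept_memory() flow looks like this (condensed
from the diff below; declarations and address arithmetic omitted):

retry:
	spin_lock_irqsave(&unaccepted_memory_lock, flags);

	/* Is somebody else already accepting (part of) this range? */
	list_for_each_entry(entry, &accepting_list, list) {
		if (entry->end < range.start || entry->start >= range.end)
			continue;
		/* Yes: drop the lock and retry until they are done. */
		spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
		goto retry;
	}

	/* Claim the range so nobody else accepts it in parallel. */
	list_add(&range.list, &accepting_list);

	range_start = range.start;
	for_each_set_bitrange_from(range_start, range_end, unaccepted->bitmap,
				   range.end) {
		/* Drop the lock (IRQs stay disabled) around the slow accept. */
		spin_unlock(&unaccepted_memory_lock);
		arch_accept_memory(phys_start, phys_end);
		spin_lock(&unaccepted_memory_lock);
		bitmap_clear(unaccepted->bitmap, range_start, len);
	}

	list_del(&range.list);
	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);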
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Michael Roth <michael.roth@amd.com>
Fixes: 2053bc57f367 ("efi: Add unaccepted memory support")
Cc: <stable@kernel.org>
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
---
v2:
- Fix deadlock (Vlastimil);
- Fix comments (Vlastimil);
- s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
from atomic context;
---
drivers/firmware/efi/unaccepted_memory.c | 71 ++++++++++++++++++++++--
1 file changed, 67 insertions(+), 4 deletions(-)
diff --git a/drivers/firmware/efi/unaccepted_memory.c b/drivers/firmware/efi/unaccepted_memory.c
index 853f7dc3c21d..fa3363889224 100644
--- a/drivers/firmware/efi/unaccepted_memory.c
+++ b/drivers/firmware/efi/unaccepted_memory.c
@@ -5,9 +5,17 @@
 #include <linux/spinlock.h>
 #include <asm/unaccepted_memory.h>
 
-/* Protects unaccepted memory bitmap */
+/* Protects unaccepted memory bitmap and accepting_list */
 static DEFINE_SPINLOCK(unaccepted_memory_lock);
 
+struct accept_range {
+	struct list_head list;
+	unsigned long start;
+	unsigned long end;
+};
+
+static LIST_HEAD(accepting_list);
+
 /*
  * accept_memory() -- Consult bitmap and accept the memory if needed.
  *
@@ -24,6 +32,7 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
 {
 	struct efi_unaccepted_memory *unaccepted;
 	unsigned long range_start, range_end;
+	struct accept_range range, *entry;
 	unsigned long flags;
 	u64 unit_size;
 
@@ -78,20 +87,74 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
 	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
 		end = unaccepted->size * unit_size * BITS_PER_BYTE;
 
-	range_start = start / unit_size;
-
+	range.start = start / unit_size;
+	range.end = DIV_ROUND_UP(end, unit_size);
+retry:
 	spin_lock_irqsave(&unaccepted_memory_lock, flags);
+
+	/*
+	 * Check if anybody works on accepting the same range of the memory.
+	 *
+	 * The check is done with unit_size granularity. It is crucial to catch
+	 * all accept requests to the same unit_size block, even if they don't
+	 * overlap on physical address level.
+	 */
+	list_for_each_entry(entry, &accepting_list, list) {
+		if (entry->end < range.start)
+			continue;
+		if (entry->start >= range.end)
+			continue;
+
+		/*
+		 * Somebody else accepting the range. Or at least part of it.
+		 *
+		 * Drop the lock and retry until it is complete.
+		 */
+		spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
+
+		/*
+		 * The code is reachable from atomic context.
+		 * cond_resched() cannot be used.
+		 */
+		cpu_relax();
+
+		goto retry;
+	}
+
+	/*
+	 * Register that the range is about to be accepted.
+	 * Make sure nobody else will accept it.
+	 */
+	list_add(&range.list, &accepting_list);
+
+	range_start = range.start;
 	for_each_set_bitrange_from(range_start, range_end, unaccepted->bitmap,
-				   DIV_ROUND_UP(end, unit_size)) {
+				   range.end) {
 		unsigned long phys_start, phys_end;
 		unsigned long len = range_end - range_start;
 
 		phys_start = range_start * unit_size + unaccepted->phys_base;
 		phys_end = range_end * unit_size + unaccepted->phys_base;
 
+		/*
+		 * Keep interrupts disabled until the accept operation is
+		 * complete in order to prevent deadlocks.
+		 *
+		 * Enabling interrupts before calling arch_accept_memory()
+		 * creates an opportunity for an interrupt handler to request
+		 * acceptance for the same memory. The handler will continuously
+		 * spin with interrupts disabled, preventing other task from
+		 * making progress with the acceptance process.
+		 */
+		spin_unlock(&unaccepted_memory_lock);
+
 		arch_accept_memory(phys_start, phys_end);
+
+		spin_lock(&unaccepted_memory_lock);
 		bitmap_clear(unaccepted->bitmap, range_start, len);
 	}
 
+	list_del(&range.list);
 	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
 }
 
--
2.41.0
On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
> Michael reported soft lockups on a system that has unaccepted memory.
> This occurs when a user attempts to allocate and accept memory on
> multiple CPUs simultaneously.
>
> The root cause of the issue is that memory acceptance is serialized with
> a spinlock, allowing only one CPU to accept memory at a time. The other
> CPUs spin and wait for their turn, leading to starvation and soft lockup
> reports.
>
> To address this, the code has been modified to release the spinlock
> while accepting memory. This allows for parallel memory acceptance on
> multiple CPUs.
>
> A newly introduced "accepting_list" keeps track of which memory is
> currently being accepted. This is necessary to prevent parallel
> acceptance of the same memory block. If a collision occurs, the lock is
> released and the process is retried.
>
> Such collisions should rarely occur. The main path for memory acceptance
> is the page allocator, which accepts memory in MAX_ORDER chunks. As long
> as MAX_ORDER is equal to or larger than the unit_size, collisions will
> never occur because the caller fully owns the memory block being
> accepted.
>
> Aside from the page allocator, only memblock and deferered_free_range()
> accept memory, but this only happens during boot.
>
> The code has been tested with unit_size == 128MiB to trigger collisions
> and validate the retry codepath.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Reported-by: Michael Roth <michael.roth@amd.com
Tested-by: Michael Roth <michael.roth@amd.com>
This seems to improve things pretty dramatically for me. Previously I
saw soft-lockups with 16 vCPUs and 16 processes faulting into memory,
and now I can do 128+ vCPUs/processes.
I can still trigger soft lock-ups on occasion if the number of processes
faulting in memory exceeds the number of vCPUs available to the guest, but
with a 32 vCPU guest even something like this:
stress --vm 128 --vm-bytes 2G --vm-keep --cpu 255
still seems to avoid the soft lock-up messages. So that's probably well
into "potential future optimization" territory and this patch fixes the
more immediate issues.
Thanks!
-Mike
> Fixes: 2053bc57f367 ("efi: Add unaccepted memory support")
> Cc: <stable@kernel.org>
> Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
> ---
>
> v2:
> - Fix deadlock (Vlastimil);
> - Fix comments (Vlastimil);
> - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
> from atomic context;
>
On 10/16/23 22:54, Michael Roth wrote:
> On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
>> Michael reported soft lockups on a system that has unaccepted memory.
>> This occurs when a user attempts to allocate and accept memory on
>> multiple CPUs simultaneously.
>>
>> The root cause of the issue is that memory acceptance is serialized with
>> a spinlock, allowing only one CPU to accept memory at a time. The other
>> CPUs spin and wait for their turn, leading to starvation and soft lockup
>> reports.
>>
>> To address this, the code has been modified to release the spinlock
>> while accepting memory. This allows for parallel memory acceptance on
>> multiple CPUs.
>>
>> A newly introduced "accepting_list" keeps track of which memory is
>> currently being accepted. This is necessary to prevent parallel
>> acceptance of the same memory block. If a collision occurs, the lock is
>> released and the process is retried.
>>
>> Such collisions should rarely occur. The main path for memory acceptance
>> is the page allocator, which accepts memory in MAX_ORDER chunks. As long
>> as MAX_ORDER is equal to or larger than the unit_size, collisions will
>> never occur because the caller fully owns the memory block being
>> accepted.
>>
>> Aside from the page allocator, only memblock and deferered_free_range()
>> accept memory, but this only happens during boot.
>>
>> The code has been tested with unit_size == 128MiB to trigger collisions
>> and validate the retry codepath.
>>
>> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
>> Reported-by: Michael Roth <michael.roth@amd.com
>
> Tested-by: Michael Roth <michael.roth@amd.com>
>
> This seems to improve things pretty dramatically for me. Previously I
> saw soft-lockups with 16 vCPUs and 16 processes faulting into memory,
> and now I can do 128+ vCPUs/processes.
>
> I can still trigger soft lock-ups on occassion if the number of processes
> faulting in memory exceeds the number of vCPUs available to the guest, but
> with a 32 vCPU guest even something like this:
>
> stress --vm 128 --vm-bytes 2G --vm-keep --cpu 255
>
> still seems to avoid the soft lock-up messages. So that's probably well
> into "potential future optimization" territory and this patch fixes the
> more immediate issues.
Do you mean that the guest pretends it has more cpus than the host provides
to it? I think such a cpu-starving configuration is prone to softlockups
already, so it wouldn't be new.
If you mean the guest has as many cpus as the host provides to it, but you
stress with many more than that number of processes, then I wonder how
softlockups would happen due to the extra processes. Since irqs are disabled
through the whole operation, the extra processes can't become scheduled, and
not being scheduled due to overloading doesn't trigger softlockups, hmm...
> Thanks!
>
> -Mike
>
>> Fixes: 2053bc57f367 ("efi: Add unaccepted memory support")
>> Cc: <stable@kernel.org>
>> Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
>> ---
>>
>> v2:
>> - Fix deadlock (Vlastimil);
>> - Fix comments (Vlastimil);
>> - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
>> from atomic context;
>>
On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
> v2:
> - Fix deadlock (Vlastimil);
> - Fix comments (Vlastimil);
> - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
>   from atomic context;

Isn't there an implicit cpu_relax() while we're spinning? Does this
really accomplish anything?

> +retry:
> 	spin_lock_irqsave(&unaccepted_memory_lock, flags);

[...]

> +	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
> +
> +	/*
> +	 * The code is reachable from atomic context.
> +	 * cond_resched() cannot be used.
> +	 */
> +	cpu_relax();
> +
> +	goto retry;
On Mon, Oct 16, 2023 at 06:55:41PM +0100, Matthew Wilcox wrote:
> On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
> > v2:
> > - Fix deadlock (Vlastimil);
> > - Fix comments (Vlastimil);
> > - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
> >   from atomic context;
>
> Isn't there an implicit cpu_relax() while we're spinning? Does this
> really accomplish anything?

You are right. It is useless. I will drop it in v3.

-- 
Kiryl Shutsemau / Kirill A. Shutemov
On Mon, 16 Oct 2023 at 23:39, Kirill A. Shutemov
<kirill.shutemov@linux.intel.com> wrote:
>
> On Mon, Oct 16, 2023 at 06:55:41PM +0100, Matthew Wilcox wrote:
> > On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
> > > v2:
> > > - Fix deadlock (Vlastimil);
> > > - Fix comments (Vlastimil);
> > > - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
> > >   from atomic context;
> >
> > Isn't there an implicit cpu_relax() while we're spinning? Does this
> > really accomplish anything?
>
> You are right. It is useless. I will drop it in v3.
>

I can drop that bit when applying the patch.

One question I have is whether the sequence

spin_lock_irqsave(&unaccepted_memory_lock, flags);
...
spin_unlock(&unaccepted_memory_lock);
arch_accept_memory(phys_start, phys_end);
spin_lock(&unaccepted_memory_lock);
...
spin_unlock_irqrestore(&unaccepted_memory_lock, flags);

is considered sound and is supported by all architectures?
On Tue, Oct 17, 2023 at 09:42:13AM +0200, Ard Biesheuvel wrote:
> One question I have is whether the sequence
>
> spin_lock_irqsave(&unaccepted_memory_lock, flags);
> ...
> spin_unlock(&unaccepted_memory_lock);
> arch_accept_memory(phys_start, phys_end);
> spin_lock(&unaccepted_memory_lock);
> ...
> spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
>
> is considered sound and is supported by all architectures?

Yes.
On Tue, 17 Oct 2023 at 12:17, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Tue, Oct 17, 2023 at 09:42:13AM +0200, Ard Biesheuvel wrote:
>
> > One question I have is whether the sequence
> >
> > spin_lock_irqsave(&unaccepted_memory_lock, flags);
> > ...
> > spin_unlock(&unaccepted_memory_lock);
> > arch_accept_memory(phys_start, phys_end);
> > spin_lock(&unaccepted_memory_lock);
> > ...
> > spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
> >
> > is considered sound and is supported by all architectures?
>
> Yes.

Thanks for the clarification. I've queued this up now (with the
cpu_relax() removed).
On Tue, Oct 17, 2023 at 09:42:13AM +0200, Ard Biesheuvel wrote:
> On Mon, 16 Oct 2023 at 23:39, Kirill A. Shutemov
> <kirill.shutemov@linux.intel.com> wrote:
> >
> > On Mon, Oct 16, 2023 at 06:55:41PM +0100, Matthew Wilcox wrote:
> > > On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
> > > > v2:
> > > > - Fix deadlock (Vlastimil);
> > > > - Fix comments (Vlastimil);
> > > > - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
> > > >   from atomic context;
> > >
> > > Isn't there an implicit cpu_relax() while we're spinning? Does this
> > > really accomplish anything?
> >
> > You are right. It is useless. I will drop it in v3.
> >
>
> I can drop that bit when applying the patch.
>
> One question I have is whether the sequence
>
> spin_lock_irqsave(&unaccepted_memory_lock, flags);
> ...
> spin_unlock(&unaccepted_memory_lock);
> arch_accept_memory(phys_start, phys_end);
> spin_lock(&unaccepted_memory_lock);
> ...
> spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
>
> is considered sound and is supported by all architectures?

I am not a locking expert and only tested it on x86. But what potential
issue do you see?

-- 
Kiryl Shutsemau / Kirill A. Shutemov
On Tue, 17 Oct 2023 at 11:44, Kirill A. Shutemov
<kirill.shutemov@linux.intel.com> wrote:
>
> On Tue, Oct 17, 2023 at 09:42:13AM +0200, Ard Biesheuvel wrote:
> > On Mon, 16 Oct 2023 at 23:39, Kirill A. Shutemov
> > <kirill.shutemov@linux.intel.com> wrote:
> > >
> > > On Mon, Oct 16, 2023 at 06:55:41PM +0100, Matthew Wilcox wrote:
> > > > On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
> > > > > v2:
> > > > > - Fix deadlock (Vlastimil);
> > > > > - Fix comments (Vlastimil);
> > > > > - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
> > > > >   from atomic context;
> > > >
> > > > Isn't there an implicit cpu_relax() while we're spinning? Does this
> > > > really accomplish anything?
> > >
> > > You are right. It is useless. I will drop it in v3.
> > >
> >
> > I can drop that bit when applying the patch.
> >
> > One question I have is whether the sequence
> >
> > spin_lock_irqsave(&unaccepted_memory_lock, flags);
> > ...
> > spin_unlock(&unaccepted_memory_lock);
> > arch_accept_memory(phys_start, phys_end);
> > spin_lock(&unaccepted_memory_lock);
> > ...
> > spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
> >
> > is considered sound and is supported by all architectures?
>
> I am not an locking expert and only tested it on x86. But what potential
> issue do you see?
>

Not sure. It just looks slightly out of place, and I am curious
whether all architectures tolerate this asymmetric use.
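For what it's worth, the sequence is sound on non-PREEMPT_RT kernels (as
Peter confirms above) because spin_lock_irqsave() saves the previous IRQ
state in the caller's local 'flags' variable rather than in the lock
itself; the inner unlock/lock pair never touches the IRQ state, which is
only restored by the final irqrestore. A rough sketch of the shape being
discussed, with slow_work() standing in for arch_accept_memory():

	unsigned long flags;

	spin_lock_irqsave(&lock, flags);	/* IRQs off, old state saved in 'flags' */
	...
	spin_unlock(&lock);			/* drops only the lock, IRQs stay off */
	slow_work();				/* runs without the lock, IRQs still off */
	spin_lock(&lock);			/* re-acquire, IRQ state untouched */
	...
	spin_unlock_irqrestore(&lock, flags);	/* drop lock, restore saved IRQ state */

PREEMPT_RT behaves differently (spinlocks become sleeping locks and IRQs
are not disabled), as Vlastimil notes further down the thread.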
On 10/16/23 18:31, Kirill A. Shutemov wrote:
> Michael reported soft lockups on a system that has unaccepted memory.
> This occurs when a user attempts to allocate and accept memory on
> multiple CPUs simultaneously.
>
> The root cause of the issue is that memory acceptance is serialized with
> a spinlock, allowing only one CPU to accept memory at a time. The other
> CPUs spin and wait for their turn, leading to starvation and soft lockup
> reports.
>
> To address this, the code has been modified to release the spinlock
> while accepting memory. This allows for parallel memory acceptance on
> multiple CPUs.
>
> A newly introduced "accepting_list" keeps track of which memory is
> currently being accepted. This is necessary to prevent parallel
> acceptance of the same memory block. If a collision occurs, the lock is
> released and the process is retried.
>
> Such collisions should rarely occur. The main path for memory acceptance
> is the page allocator, which accepts memory in MAX_ORDER chunks. As long
> as MAX_ORDER is equal to or larger than the unit_size, collisions will
> never occur because the caller fully owns the memory block being
> accepted.
>
> Aside from the page allocator, only memblock and deferered_free_range()
> accept memory, but this only happens during boot.
>
> The code has been tested with unit_size == 128MiB to trigger collisions
> and validate the retry codepath.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Reported-by: Michael Roth <michael.roth@amd.com
> Fixes: 2053bc57f367 ("efi: Add unaccepted memory support")
> Cc: <stable@kernel.org>
> Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
<snip>
> +	range_start = range.start;
>  	for_each_set_bitrange_from(range_start, range_end, unaccepted->bitmap,
> -				   DIV_ROUND_UP(end, unit_size)) {
> +				   range.end) {
>  		unsigned long phys_start, phys_end;
>  		unsigned long len = range_end - range_start;
> 
>  		phys_start = range_start * unit_size + unaccepted->phys_base;
>  		phys_end = range_end * unit_size + unaccepted->phys_base;
> 
> +		/*
> +		 * Keep interrupts disabled until the accept operation is
> +		 * complete in order to prevent deadlocks.
> +		 *
> +		 * Enabling interrupts before calling arch_accept_memory()
> +		 * creates an opportunity for an interrupt handler to request
> +		 * acceptance for the same memory. The handler will continuously
> +		 * spin with interrupts disabled, preventing other task from
> +		 * making progress with the acceptance process.
> +		 */
AFAIU on PREEMPT_RT the spin_lock_irqsave() doesn't disable interrupts, so
this does not leave them disabled. But it also shouldn't be a risk of
deadlock because the interrupt handlers are themselves preemptible. The
latency might be bad as the cpu_relax() retry loop will not cause the task
everyone might be waiting for to be prioritised, but I guess it's not a big
issue as anyone with RT requirements probably won't use unaccepted memory in
the first place, and as you mention, hitting the retry loop after boot in a
normal configuration should pretty much never happen.
> +		spin_unlock(&unaccepted_memory_lock);
> +
>  		arch_accept_memory(phys_start, phys_end);
> +
> +		spin_lock(&unaccepted_memory_lock);
>  		bitmap_clear(unaccepted->bitmap, range_start, len);
>  	}
> +
> +	list_del(&range.list);
>  	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
>  }
>