console_force_unlock() might result in a subsequent ASSERT() triggering
when CONFIG_DEBUG_LOCKS is active. Avoid that by calling
spin_debug_disable() in console_force_unlock() and by making the spinlock
debug assertions trigger only if spin_debug is active.
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/spinlock.c      | 2 +-
 xen/drivers/char/console.c | 1 +
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index ed69f0a4d2..2a06de3e6a 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -85,7 +85,7 @@ static void got_lock(union lock_debug *debug)
 
 static void rel_lock(union lock_debug *debug)
 {
-    ASSERT(debug->cpu == smp_processor_id());
+    ASSERT(atomic_read(&spin_debug) <= 0 || debug->cpu == smp_processor_id());
     debug->cpu = SPINLOCK_NO_CPU;
 }
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index b31d789a5d..4bcbbfa7d6 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -1077,6 +1077,7 @@ void console_unlock_recursive_irqrestore(unsigned long flags)
 void console_force_unlock(void)
 {
     watchdog_disable();
+    spin_debug_disable();
     spin_lock_init(&console_lock);
     serial_force_unlock(sercon_handle);
     console_locks_busted = 1;
--
2.16.4
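For context, the ASSERT() the commit message refers to is the one in rel_lock() above: console_force_unlock() re-initializes console_lock via spin_lock_init(), so a later release of the lock can find a recorded owner CPU that does not match smp_processor_id(), tripping the assertion unless the debug checks are switched off first. Below is a minimal standalone sketch (not Xen code) of that gating pattern; every name and type in it is a simplified stand-in for illustration, and only the shape of spin_debug_disable() and rel_lock() mirrors the patch.

/*
 * Standalone sketch of the gating pattern: a global counter controls
 * whether the lock-debug assertion is evaluated, and the crash path
 * drops the checks before forcibly re-initializing the lock.
 */
#include <assert.h>
#include <stdatomic.h>
#include <stdio.h>

#define SPINLOCK_NO_CPU 0xfffu

static atomic_int spin_debug = 1;          /* > 0: debug checks active */

struct lock_debug {
    unsigned int cpu;                      /* CPU recorded as lock holder */
};

static unsigned int smp_processor_id(void) /* stand-in for the real helper */
{
    return 0;
}

static void spin_debug_disable(void)
{
    atomic_fetch_sub(&spin_debug, 1);
}

static void rel_lock(struct lock_debug *debug)
{
    /* The check is skipped once spin_debug has been driven to zero or below. */
    assert(atomic_load(&spin_debug) <= 0 ||
           debug->cpu == smp_processor_id());
    debug->cpu = SPINLOCK_NO_CPU;
}

static void console_force_unlock(struct lock_debug *console_lock)
{
    spin_debug_disable();                  /* crash path: stop debug checks */
    console_lock->cpu = SPINLOCK_NO_CPU;   /* stands in for spin_lock_init() */
}

int main(void)
{
    struct lock_debug console_lock = { .cpu = 42 };   /* "held" by CPU 42 */

    console_force_unlock(&console_lock);
    rel_lock(&console_lock);   /* would assert without the disable above */
    printf("forced unlock survived the lock-debug check\n");
    return 0;
}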
On 08/01/2020 10:43, Juergen Gross wrote:
> console_force_unlock() might result in a subsequent ASSERT() triggering
> when CONFIG_DEBUG_LOCKS is active. Avoid that by calling
> spin_debug_disable() in console_force_unlock() and by making the spinlock
> debug assertions trigger only if spin_debug is active.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> xen/common/spinlock.c | 2 +-
> xen/drivers/char/console.c | 1 +
> 2 files changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
> index ed69f0a4d2..2a06de3e6a 100644
> --- a/xen/common/spinlock.c
> +++ b/xen/common/spinlock.c
> @@ -85,7 +85,7 @@ static void got_lock(union lock_debug *debug)
>
> static void rel_lock(union lock_debug *debug)
> {
> - ASSERT(debug->cpu == smp_processor_id());
> + ASSERT(atomic_read(&spin_debug) <= 0 || debug->cpu == smp_processor_id());
Personally, I think the logic would be easier to follow as:
if ( atomic_read(&spin_debug) )
    ASSERT(debug->cpu == smp_processor_id());
Otherwise, LGTM. Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>.
Can be fixed on commit if you agree.
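For reference, rel_lock() in that style would read roughly as below; the sketch uses > 0 rather than a plain truth test so the condition stays the exact inverse of the patch's <= 0 check:

static void rel_lock(union lock_debug *debug)
{
    if ( atomic_read(&spin_debug) > 0 )
        ASSERT(debug->cpu == smp_processor_id());
    debug->cpu = SPINLOCK_NO_CPU;
}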
On 08.01.20 11:58, Andrew Cooper wrote:
> On 08/01/2020 10:43, Juergen Gross wrote:
>> console_force_unlock() might result in a subsequent ASSERT() triggering
>> when CONFIG_DEBUG_LOCKS is active. Avoid that by calling
>> spin_debug_disable() in console_force_unlock() and by making the spinlock
>> debug assertions trigger only if spin_debug is active.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> xen/common/spinlock.c | 2 +-
>> xen/drivers/char/console.c | 1 +
>> 2 files changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
>> index ed69f0a4d2..2a06de3e6a 100644
>> --- a/xen/common/spinlock.c
>> +++ b/xen/common/spinlock.c
>> @@ -85,7 +85,7 @@ static void got_lock(union lock_debug *debug)
>>
>> static void rel_lock(union lock_debug *debug)
>> {
>> - ASSERT(debug->cpu == smp_processor_id());
>> + ASSERT(atomic_read(&spin_debug) <= 0 || debug->cpu == smp_processor_id());
>
> Personally, I think the logic would be easier to follow as:
>
> if ( atomic_read(&spin_debug) )
>     ASSERT(debug->cpu == smp_processor_id());
>
> Otherwise, LGTM. Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>.
> Can be fixed on commit if you agree.
>
No objection from me.
Juergen