From: Li RongQing <lirongqing@baidu.com>
The commit b2798ba0b876 ("KVM: X86: Choose qspinlock when dedicated
physical CPUs are available") states that when PV_DEDICATED=1
(vCPU has dedicated pCPU), qspinlock should be preferred regardless of
PV_UNHALT. However, the current implementation doesn't reflect this: when
PV_UNHALT=0, we still use virt_spin_lock() even with dedicated pCPUs.
This is suboptimal because:
1. Native qspinlocks should outperform virt_spin_lock() for dedicated
vCPUs irrespective of HALT exiting
2. virt_spin_lock() should only be preferred when vCPUs may be preempted
(non-dedicated case)
So reorder the PV spinlock checks to:
1. First handle the dedicated pCPU case (disable virt_spin_lock_key)
2. Then check the single-CPU and nopvspin cases
3. Only then check PV_UNHALT support
This ensures we always use native qspinlock for dedicated vCPUs, delivering
noticeable performance gains at high contention levels.
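
For reference, a condensed sketch of the resulting check ordering in
kvm_spinlock_init() (pr_info() messages and the pv_ops.lock wiring are elided;
the dedicated/single-CPU/nopvspin checks already exist upstream and are only
reordered relative to the PV_UNHALT check, helper names as in
arch/x86/kernel/kvm.c):

void __init kvm_spinlock_init(void)
{
	/*
	 * 1) Dedicated pCPUs: jump to "out" so virt_spin_lock_key is
	 *    disabled and the native qspinlock is used.
	 */
	if (kvm_para_has_hint(KVM_HINTS_REALTIME))
		goto out;

	/* 2) Single CPU or "nopvspin" on the command line: no PV spinlocks. */
	if (num_possible_cpus() == 1 || nopvspin)
		goto out;

	/*
	 * 3) Only now check PV_UNHALT.  Without it, return early and keep
	 *    virt_spin_lock_key enabled, because a preemptable
	 *    (non-dedicated) vCPU still prefers virt_spin_lock() over the
	 *    native qspinlock.
	 */
	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
		return;

	__pv_init_lock_hash();
	/* ... pv_ops.lock slow path, wait/kick hooks ... */
out:
	static_branch_disable(&virt_spin_lock_key);
}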
Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
Changes since v1: rewrote the changelog
arch/x86/kernel/kvm.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 921c1c7..9cda79f 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -1073,16 +1073,6 @@ static void kvm_wait(u8 *ptr, u8 val)
 void __init kvm_spinlock_init(void)
 {
 	/*
-	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
-	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
-	 * preferred over native qspinlock when vCPU is preempted.
-	 */
-	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
-		pr_info("PV spinlocks disabled, no host support\n");
-		return;
-	}
-
-	/*
 	 * Disable PV spinlocks and use native qspinlock when dedicated pCPUs
 	 * are available.
 	 */
@@ -1101,6 +1091,16 @@ void __init kvm_spinlock_init(void)
 		goto out;
 	}
 
+	/*
+	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
+	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
+	 * preferred over native qspinlock when vCPU is preempted.
+	 */
+	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
+		pr_info("PV spinlocks disabled, no host support\n");
+		return;
+	}
+
 	pr_info("PV spinlocks enabled\n");
 
 	__pv_init_lock_hash();
--
2.9.4
On Tue, 22 Jul 2025 19:00:05 +0800, lirongqing wrote:
> The commit b2798ba0b876 ("KVM: X86: Choose qspinlock when dedicated
> physical CPUs are available") states that when PV_DEDICATED=1
> (vCPU has dedicated pCPU), qspinlock should be preferred regardless of
> PV_UNHALT. However, the current implementation doesn't reflect this: when
> PV_UNHALT=0, we still use virt_spin_lock() even with dedicated pCPUs.
>
> This is suboptimal because:
> 1. Native qspinlocks should outperform virt_spin_lock() for dedicated
>    vCPUs irrespective of HALT exiting
> 2. virt_spin_lock() should only be preferred when vCPUs may be preempted
>    (non-dedicated case)
>
> [...]

Applied to kvm-x86 guest, thanks!

[1/1] x86/kvm: Prefer native qspinlock for dedicated vCPUs irrespective of PV_UNHALT
      https://github.com/kvm-x86/linux/commit/960550503965

--
https://github.com/kvm-x86/linux/tree/next
On Tue, Jul 22, 2025, lirongqing wrote:
> From: Li RongQing <lirongqing@baidu.com>
>
> The commit b2798ba0b876 ("KVM: X86: Choose qspinlock when dedicated
> physical CPUs are available") states that when PV_DEDICATED=1
> (vCPU has dedicated pCPU), qspinlock should be preferred regardless of
> PV_UNHALT. However, the current implementation doesn't reflect this: when
> PV_UNHALT=0, we still use virt_spin_lock() even with dedicated pCPUs.
>
> [...]

Reviewed-by: Sean Christopherson <seanjc@google.com>
On 7/22/2025 7:00 PM, lirongqing wrote:
> From: Li RongQing <lirongqing@baidu.com>
>
> The commit b2798ba0b876 ("KVM: X86: Choose qspinlock when dedicated
> physical CPUs are available") states that when PV_DEDICATED=1
> (vCPU has dedicated pCPU), qspinlock should be preferred regardless of
> PV_UNHALT. However, the current implementation doesn't reflect this: when
> PV_UNHALT=0, we still use virt_spin_lock() even with dedicated pCPUs.
>
> [...]

For a non-overcommitted VM, we may add the `-overcommit cpu-pm=on` option to
qemu-kvm so the guest handles idle by itself, reducing latency. The current
kernel still falls back to virt_spin_lock() in that case, even when
kvm-hint-dedicated is provided. With this patch it uses the MCS-based queued
spinlock (native qspinlock) instead, for better performance.

Tested-by: Wangyang Guo <wangyang.guo@intel.com>
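
For anyone reproducing the setup described above, a rough sketch of the
qemu-kvm invocation (option spellings as I understand current QEMU; mem-lock=on
is included because QEMU may require locked guest memory when the dedicated
hint is set, and the vCPU threads still need to be pinned to isolated host CPUs
via libvirt vcpupin or taskset for the hint to be truthful; disk/network
options are omitted):

# kvm-hint-dedicated=on exposes the dedicated-pCPU hint (KVM_HINTS_REALTIME);
# cpu-pm=on lets the guest manage idle itself, mem-lock=on pins guest RAM.
qemu-system-x86_64 -enable-kvm -smp 8 -m 16G \
    -cpu host,kvm-hint-dedicated=on \
    -overcommit mem-lock=on,cpu-pm=on \
    ...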