From: Li RongQing <lirongqing@baidu.com>
When a vCPU has a dedicated physical CPU, the hypervisor typically
disables HLT exits as well, rendering the KVM_FEATURE_PV_UNHALT feature
unavailable, and virt_spin_lock_key is expected to be disabled in
this configuration, but:
The problematic execution flow leaves virt_spin_lock_key enabled:
- First check PV_UNHALT
- Then check dedicated CPUs
So change the order:
- First check dedicated CPUs
- Then check PV_UNHALT
This ensures virt_spin_lock_key is disabled when dedicated physical
CPUs are available and HLT exits are disabled, which gives a
considerable performance boost at high contention levels.
Signed-off-by: Li RongQing <lirongqing@baidu.com>
---
arch/x86/kernel/kvm.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 921c1c7..9cda79f 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -1073,16 +1073,6 @@ static void kvm_wait(u8 *ptr, u8 val)
 void __init kvm_spinlock_init(void)
 {
 	/*
-	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
-	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
-	 * preferred over native qspinlock when vCPU is preempted.
-	 */
-	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
-		pr_info("PV spinlocks disabled, no host support\n");
-		return;
-	}
-
-	/*
 	 * Disable PV spinlocks and use native qspinlock when dedicated pCPUs
 	 * are available.
 	 */
@@ -1101,6 +1091,16 @@ void __init kvm_spinlock_init(void)
 		goto out;
 	}
 
+	/*
+	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
+	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
+	 * preferred over native qspinlock when vCPU is preempted.
+	 */
+	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
+		pr_info("PV spinlocks disabled, no host support\n");
+		return;
+	}
+
 	pr_info("PV spinlocks enabled\n");
 
 	__pv_init_lock_hash();
--
2.9.4
On Fri, Jul 18, 2025, lirongqing wrote:
> From: Li RongQing <lirongqing@baidu.com>
>
> When a vCPU has a dedicated physical CPU, the hypervisor typically
> disables HLT exits as well,

But certainly not always. E.g. the hypervisor may disable MWAIT exiting but
not HLT exiting, so that the hypervisor can take action if a guest kernel refuses
to use MWAIT for whatever reason.

I assume native qspinlocks outperform virt_spin_lock() irrespective of HLT
exiting when the vCPU has a dedicated pCPU? If so, it's probably worth calling
that out in the changelog, e.g. to assuage any fears/concerns about this being
undesirable for setups with KVM_HINTS_REALTIME *and* KVM_FEATURE_PV_UNHALT.

> rendering the KVM_FEATURE_PV_UNHALT feature unavailable, and
> virt_spin_lock_key is expected to be disabled in this configuration, but:
>
> The problematic execution flow leaves virt_spin_lock_key enabled:
> - First check PV_UNHALT
> - Then check dedicated CPUs
>
> So change the order:
> - First check dedicated CPUs
> - Then check PV_UNHALT
>
> This ensures virt_spin_lock_key is disabled when dedicated physical
> CPUs are available and HLT exits are disabled, which gives a
> considerable performance boost at high contention levels.
>
> Signed-off-by: Li RongQing <lirongqing@baidu.com>
> ---
>  arch/x86/kernel/kvm.c | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 921c1c7..9cda79f 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -1073,16 +1073,6 @@ static void kvm_wait(u8 *ptr, u8 val)
>  void __init kvm_spinlock_init(void)
>  {
>  	/*
> -	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> -	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> -	 * preferred over native qspinlock when vCPU is preempted.
> -	 */
> -	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> -		pr_info("PV spinlocks disabled, no host support\n");
> -		return;
> -	}
> -
> -	/*
>  	 * Disable PV spinlocks and use native qspinlock when dedicated pCPUs
>  	 * are available.
>  	 */
> @@ -1101,6 +1091,16 @@ void __init kvm_spinlock_init(void)
>  		goto out;
>  	}
>
> +	/*
> +	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> +	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> +	 * preferred over native qspinlock when vCPU is preempted.
> +	 */
> +	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> +		pr_info("PV spinlocks disabled, no host support\n");
> +		return;
> +	}
> +
>  	pr_info("PV spinlocks enabled\n");
>
>  	__pv_init_lock_hash();
> --
> 2.9.4
>
> On Fri, Jul 18, 2025, lirongqing wrote:
> > From: Li RongQing <lirongqing@baidu.com>
> >
> > When a vCPU has a dedicated physical CPU, the hypervisor typically
> > disables HLT exits as well,
>
> But certainly not always. E.g. the hypervisor may disable MWAIT exiting but
> not HLT exiting, so that the hypervisor can take action if a guest kernel refuses
> to use MWAIT for whatever reason.
>
> I assume native qspinlocks outperform virt_spin_lock() irrespective of HLT
> exiting when the vCPU has a dedicated pCPU?
"I think this is true. As the comment (KVM: X86: Choose qspinlock when dedicated physical CPUs are available) says:
'PV_DEDICATED = 1, PV_UNHALT = anything: default is qspinlock'.
However, the current code doesn't reflect this. When PV_UNHALT=0, it still uses virt_spin_lock(). My patch is fixing this inconsistency.
commit b2798ba0b8769b42f00899b44a538b5fcecb480d
Author: Wanpeng Li <wanpengli@tencent.com>
Date: Tue Feb 13 09:05:41 2018 +0800
KVM: X86: Choose qspinlock when dedicated physical CPUs are available
Waiman Long mentioned that:
> Generally speaking, unfair lock performs well for VMs with a small
> number of vCPUs. Native qspinlock may perform better than pvqspinlock
> if there is vCPU pinning and there is no vCPU over-commitment.
This patch uses the KVM_HINTS_DEDICATED performance hint, which is
provided by the hypervisor admin, to choose the qspinlock algorithm
when a dedicated physical CPU is available.
PV_DEDICATED = 1, PV_UNHALT = anything: default is qspinlock
PV_DEDICATED = 0, PV_UNHALT = 1: default is Hybrid PV queued/unfair lock
PV_DEDICATED = 0, PV_UNHALT = 0: default is tas
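
To make that concrete, the selection the table implies looks roughly like
the sketch below. This is illustrative only, not the current kernel code;
it assumes the checks run early in kvm_spinlock_init(), and that
KVM_HINTS_DEDICATED is today's KVM_HINTS_REALTIME (the hint was renamed):

	if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
		/* PV_DEDICATED = 1, PV_UNHALT = anything: native qspinlock */
		static_branch_disable(&virt_spin_lock_key);
		return;
	}

	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
		/* PV_DEDICATED = 0, PV_UNHALT = 0: tas via virt_spin_lock() */
		return;
	}

	/* PV_DEDICATED = 0, PV_UNHALT = 1: hybrid PV queued/unfair lock */

Today the PV_UNHALT check runs first, so the dedicated-pCPU case never
reaches the static_branch_disable() above when PV_UNHALT is off.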
> If so, it's probably worth calling
> that out in the changelog, e.g. to assuage any fears/concerns about this being
> undesirable for setups with KVM_HINTS_REALTIME *and*
> KVM_FEATURE_PV_UNHALT.
>
OK, I will rewrite the changelog.

If you still have concerns, I think we can change the code as below:
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 921c1c7..6275d78 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -1078,8 +1078,13 @@ void __init kvm_spinlock_init(void)
 	 * preferred over native qspinlock when vCPU is preempted.
 	 */
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
-		pr_info("PV spinlocks disabled, no host support\n");
-		return;
+		if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
+			pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints\n");
+			goto out;
+		} else {
+			pr_info("PV spinlocks disabled, no host support\n");
+			return;
+		}
 	}
 
 	/*
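
With that variant applied, the head of kvm_spinlock_init() would read
roughly as follows. This is a sketch of the resulting flow only; the out:
label at the bottom of the function already does
static_branch_disable(&virt_spin_lock_key), as in the current code:

void __init kvm_spinlock_init(void)
{
	/*
	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
	 * preferred over native qspinlock when vCPU is preempted.
	 */
	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
		if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
			/* dedicated pCPUs: fall back to native qspinlock */
			pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints\n");
			goto out;
		} else {
			pr_info("PV spinlocks disabled, no host support\n");
			return;
		}
	}

	/* ... remaining checks and PV setup unchanged ... */

out:
	static_branch_disable(&virt_spin_lock_key);
}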
Thanks
-Li
> > rendering the KVM_FEATURE_PV_UNHALT feature unavailable, and
> > virt_spin_lock_key is expected to be disabled in this configuration, but:
> >
> > The problematic execution flow leaves virt_spin_lock_key enabled:
> > - First check PV_UNHALT
> > - Then check dedicated CPUs
> >
> > So change the order:
> > - First check dedicated CPUs
> > - Then check PV_UNHALT
> >
> > This ensures virt_spin_lock_key is disabled when dedicated physical
> > CPUs are available and HLT exits are disabled, which gives a
> > considerable performance boost at high contention levels.
> >
> > Signed-off-by: Li RongQing <lirongqing@baidu.com>
> > ---
> > arch/x86/kernel/kvm.c | 20 ++++++++++----------
> > 1 file changed, 10 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > index 921c1c7..9cda79f 100644
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -1073,16 +1073,6 @@ static void kvm_wait(u8 *ptr, u8 val)
> >  void __init kvm_spinlock_init(void)
> >  {
> >  	/*
> > -	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> > -	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> > -	 * preferred over native qspinlock when vCPU is preempted.
> > -	 */
> > -	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> > -		pr_info("PV spinlocks disabled, no host support\n");
> > -		return;
> > -	}
> > -
> > -	/*
> >  	 * Disable PV spinlocks and use native qspinlock when dedicated pCPUs
> >  	 * are available.
> >  	 */
> > @@ -1101,6 +1091,16 @@ void __init kvm_spinlock_init(void)
> >  		goto out;
> >  	}
> >
> > +	/*
> > +	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> > +	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> > +	 * preferred over native qspinlock when vCPU is preempted.
> > +	 */
> > +	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> > +		pr_info("PV spinlocks disabled, no host support\n");
> > +		return;
> > +	}
> > +
> >  	pr_info("PV spinlocks enabled\n");
> >
> >  	__pv_init_lock_hash();
> > --
> > 2.9.4
> >