From: "Li,Rongqing"
To: Sean Christopherson
CC: pbonzini@redhat.com, vkuznets@redhat.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
    x86@kernel.org, hpa@zytor.com, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org
Subject: Reply: Re: [PATCH] x86/kvm: Reorder PV spinlock checks for dedicated CPU case
Date: Mon, 21 Jul 2025 02:26:25 +0000
References: <20250718094936.5283-1-lirongqing@baidu.com>

> On Fri, Jul 18, 2025, lirongqing wrote:
> > From: Li RongQing
> >
> > When a vCPU has a dedicated physical CPU, the hypervisor typically
> > disables the HLT exit too,
>
> But certainly not always. E.g. the hypervisor may disable MWAIT exiting but
> not HLT exiting, so that the hypervisor can take action if a guest kernel refuses
> to use MWAIT for whatever reason.
>
> I assume native qspinlocks outperform virt_spin_lock() irrespective of HLT
> exiting when the vCPU has a dedicated pCPU?

I think this is true. As commit b2798ba0b876 ("KVM: X86: Choose qspinlock
when dedicated physical CPUs are available") says: "PV_DEDICATED = 1,
PV_UNHALT = anything: default is qspinlock".

However, the current code doesn't reflect this: when PV_UNHALT = 0, it
still uses virt_spin_lock(). My patch fixes this inconsistency.

commit b2798ba0b8769b42f00899b44a538b5fcecb480d
Author: Wanpeng Li
Date:   Tue Feb 13 09:05:41 2018 +0800

    KVM: X86: Choose qspinlock when dedicated physical CPUs are available

    Waiman Long mentioned that:
    > Generally speaking, unfair lock performs well for VMs with a small
    > number of vCPUs. Native qspinlock may perform better than pvqspinlock
    > if there is vCPU pinning and there is no vCPU over-commitment.

    This patch uses the KVM_HINTS_DEDICATED performance hint, which is
    provided by the hypervisor admin, to choose the qspinlock algorithm
    when a dedicated physical CPU is available.

    PV_DEDICATED = 1, PV_UNHALT = anything: default is qspinlock
    PV_DEDICATED = 0, PV_UNHALT = 1: default is Hybrid PV queued/unfair lock
    PV_DEDICATED = 0, PV_UNHALT = 0: default is tas

> If so, it's probably worth calling
> that out in the changelog, e.g. to assuage any fears/concerns about this being
> undesirable for setups with KVM_HINTS_REALTIME *and*
> KVM_FEATURE_PV_UNHALT.
>

OK, I will rewrite the changelog.

If you still have concerns, I think we can change the code as below:

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 921c1c7..6275d78 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -1078,8 +1078,14 @@ void __init kvm_spinlock_init(void)
 	 * preferred over native qspinlock when vCPU is preempted.
 	 */
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
-		pr_info("PV spinlocks disabled, no host support\n");
-		return;
+		if (kvm_para_has_hint(KVM_HINTS_REALTIME)) {
+			pr_info("PV spinlocks disabled with KVM_HINTS_REALTIME hints\n");
+			goto out;
+		} else {
+			pr_info("PV spinlocks disabled, no host support\n");
+			return;
+		}
 	}

 	/*

Thanks

-Li

> > rendering the KVM_FEATURE_PV_UNHALT feature unavailable;
> > virt_spin_lock_key is expected to be disabled in this configuration, but:
> >
> > The problematic execution flow left virt_spin_lock_key enabled:
> > - First check PV_UNHALT
> > - Then check dedicated CPUs
> >
> > So change the order:
> > - First check dedicated CPUs
> > - Then check PV_UNHALT
> >
> > This ensures virt_spin_lock_key is disabled when dedicated physical
> > CPUs are available and HLT exit is disabled, which gives a nice
> > performance boost at high contention levels.
> >
> > Signed-off-by: Li RongQing
> > ---
> >  arch/x86/kernel/kvm.c | 20 ++++++++++----------
> >  1 file changed, 10 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > index 921c1c7..9cda79f 100644
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -1073,16 +1073,6 @@ static void kvm_wait(u8 *ptr, u8 val)
> >  void __init kvm_spinlock_init(void)
> >  {
> >  	/*
> > -	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> > -	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> > -	 * preferred over native qspinlock when vCPU is preempted.
> > -	 */
> > -	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> > -		pr_info("PV spinlocks disabled, no host support\n");
> > -		return;
> > -	}
> > -
> > -	/*
> >  	 * Disable PV spinlocks and use native qspinlock when dedicated pCPUs
> >  	 * are available.
> >  	 */
> > @@ -1101,6 +1091,16 @@ void __init kvm_spinlock_init(void)
> >  		goto out;
> >  	}
> >
> > +	/*
> > +	 * In case host doesn't support KVM_FEATURE_PV_UNHALT there is still an
> > +	 * advantage of keeping virt_spin_lock_key enabled: virt_spin_lock() is
> > +	 * preferred over native qspinlock when vCPU is preempted.
> > +	 */
> > +	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) {
> > +		pr_info("PV spinlocks disabled, no host support\n");
> > +		return;
> > +	}
> > +
> >  	pr_info("PV spinlocks enabled\n");
> >
> >  	__pv_init_lock_hash();
> > --
> > 2.9.4