From: "Srivatsa S. Bhat" <srivatsa@csail.mit.edu>
To: linux-kernel@vger.kernel.org
Cc: amakhalov@vmware.com, ganb@vmware.com, ankitja@vmware.com,
    bordoloih@vmware.com, keerthanak@vmware.com, blamoreaux@vmware.com,
    namit@vmware.com, srivatsa@csail.mit.edu, Peter Zijlstra,
    Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    "H. Peter Anvin", "Rafael J. Wysocki", "Paul E. McKenney",
    Wyes Karny, Lewis Caroll, Tom Lendacky, Juergen Gross,
    x86@kernel.org, VMware PV-Drivers Reviewers,
    virtualization@lists.linux-foundation.org, kvm@vger.kernel.org,
    xen-devel@lists.xenproject.org
Subject: [PATCH v2] x86/hotplug: Do not put offline vCPUs in mwait idle state
Date: Sun, 15 Jan 2023 22:01:34 -0800
Message-Id: <20230116060134.80259-1-srivatsa@csail.mit.edu>

From: "Srivatsa S. Bhat (VMware)" <srivatsa@csail.mit.edu>

Under hypervisors that support mwait passthrough, a vCPU in mwait
CPU-idle state remains in guest context (instead of yielding to the
hypervisor via VMEXIT), which helps speed up wakeups from idle.
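
[ For context: the mwait preference in the offline path comes from
  native_play_dead(), which attempts mwait first and falls back to hlt
  only if mwait cannot be used. A simplified sketch of that function in
  arch/x86/kernel/smpboot.c, shown here for illustration only (not part
  of this patch; details may differ slightly between kernel versions): ]

	void native_play_dead(void)
	{
		play_dead_common();
		tboot_shutdown(TB_SHUTDOWN_WFS);

		/* Prefer mwait; this returns only if mwait is unusable. */
		mwait_play_dead();
		if (cpuidle_play_dead())
			hlt_play_dead();
	}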
However, this runs into problems with CPU hotplug, because the Linux
CPU offline path prefers to put the vCPU-to-be-offlined in mwait
state, whenever mwait is available. As a result, since a vCPU in
mwait remains in guest context and does not yield to the hypervisor,
an offline vCPU *appears* to be 100% busy as viewed from the host,
which prevents the hypervisor from running other vCPUs or workloads
on the corresponding pCPU. [ Note that such a vCPU is not actually
busy spinning though; it remains in mwait idle state in the guest ].

Fix this by preventing the use of mwait idle state in the vCPU
offline play_dead() path for any hypervisor, even if mwait support is
available.

Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Srivatsa S. Bhat (VMware)
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: "H. Peter Anvin"
Cc: "Rafael J. Wysocki"
Cc: "Paul E. McKenney"
Cc: Wyes Karny
Cc: Lewis Caroll
Cc: Tom Lendacky
Cc: Alexey Makhalov
Cc: Juergen Gross
Cc: x86@kernel.org
Cc: VMware PV-Drivers Reviewers
Cc: virtualization@lists.linux-foundation.org
Cc: kvm@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Reviewed-by: Juergen Gross
---

v1: https://lore.kernel.org/lkml/165843627080.142207.12667479241667142176.stgit@csail.mit.edu/

 arch/x86/kernel/smpboot.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 55cad72715d9..125a5d4bfded 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1763,6 +1763,15 @@ static inline void mwait_play_dead(void)
 		return;
 	if (!this_cpu_has(X86_FEATURE_CLFLUSH))
 		return;
+
+	/*
+	 * Do not use mwait in CPU offline play_dead if running under
+	 * any hypervisor, to make sure that the offline vCPU actually
+	 * yields to the hypervisor (which may not happen otherwise if
+	 * the hypervisor supports mwait passthrough).
+	 */
+	if (this_cpu_has(X86_FEATURE_HYPERVISOR))
+		return;
 	if (__this_cpu_read(cpu_info.cpuid_level) < CPUID_MWAIT_LEAF)
 		return;
 
-- 
2.25.1