Date: Thu, 1 Dec 2022 23:26:46 +0000
Message-ID: <20221201232655.290720-8-seanjc@google.com>
In-Reply-To: <20221201232655.290720-1-seanjc@google.com>
Subject: [PATCH 07/16] x86/reboot: Hoist "disable virt" helpers above "emergency reboot" path
From: Sean Christopherson <seanjc@google.com>
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
    x86@kernel.org, Sean Christopherson, Paolo Bonzini
Cc: "H. Peter Anvin", linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
    Andrew Cooper
Peter Anvin" , linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Andrew Cooper Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Move the various "disable virtualization" helpers above the emergency reboot path so that emergency_reboot_disable_virtualization() can be stubbed out in a future patch if neither KVM_INTEL nor KVM_AMD is enabled, i.e. if there is no in-tree user of CPU virtualization. No functional change intended. Signed-off-by: Sean Christopherson --- arch/x86/include/asm/reboot.h | 4 +- arch/x86/kernel/reboot.c | 86 +++++++++++++++++------------------ 2 files changed, 45 insertions(+), 45 deletions(-) diff --git a/arch/x86/include/asm/reboot.h b/arch/x86/include/asm/reboot.h index dc2b77e6704b..2be5b89c9a05 100644 --- a/arch/x86/include/asm/reboot.h +++ b/arch/x86/include/asm/reboot.h @@ -29,8 +29,10 @@ void __noreturn machine_real_restart(unsigned int type); typedef void (cpu_emergency_virt_cb)(void); void cpu_emergency_register_virt_callback(cpu_emergency_virt_cb *callback); void cpu_emergency_unregister_virt_callback(cpu_emergency_virt_cb *callbac= k); -#endif void cpu_emergency_disable_virtualization(void); +#else +static inline void cpu_emergency_disable_virtualization(void) {} +#endif =20 typedef void (*nmi_shootdown_cb)(int, struct pt_regs*); void nmi_panic_self_stop(struct pt_regs *regs); diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c index 3eec3cdbc054..a006ddaa1405 100644 --- a/arch/x86/kernel/reboot.c +++ b/arch/x86/kernel/reboot.c @@ -530,6 +530,48 @@ static inline void kb_wait(void) =20 static inline void nmi_shootdown_cpus_on_restart(void); =20 +#if IS_ENABLED(CONFIG_KVM_INTEL) || IS_ENABLED(CONFIG_KVM_AMD) +/* RCU-protected callback to disable virtualization prior to reboot. */ +static cpu_emergency_virt_cb __rcu *cpu_emergency_virt_callback; + +void cpu_emergency_register_virt_callback(cpu_emergency_virt_cb *callback) +{ + if (WARN_ON_ONCE(rcu_access_pointer(cpu_emergency_virt_callback))) + return; + + rcu_assign_pointer(cpu_emergency_virt_callback, callback); +} +EXPORT_SYMBOL_GPL(cpu_emergency_register_virt_callback); + +void cpu_emergency_unregister_virt_callback(cpu_emergency_virt_cb *callbac= k) +{ + if (WARN_ON_ONCE(rcu_access_pointer(cpu_emergency_virt_callback) !=3D cal= lback)) + return; + + rcu_assign_pointer(cpu_emergency_virt_callback, NULL); + synchronize_rcu(); +} +EXPORT_SYMBOL_GPL(cpu_emergency_unregister_virt_callback); + +/* + * Disable virtualization, i.e. VMX or SVM, to ensure INIT is recognized d= uring + * reboot. VMX blocks INIT if the CPU is post-VMXON, and SVM blocks INIT = if + * GIF=3D0, i.e. if the crash occurred between CLGI and STGI. + */ +void cpu_emergency_disable_virtualization(void) +{ + cpu_emergency_virt_cb *callback; + + lockdep_assert_irqs_disabled(); + + rcu_read_lock(); + callback =3D rcu_dereference(cpu_emergency_virt_callback); + if (callback) + callback(); + rcu_read_unlock(); +} +#endif /* CONFIG_KVM_INTEL || CONFIG_KVM_AMD */ + static void emergency_reboot_disable_virtualization(void) { /* Just make sure we won't change CPUs while doing this */ @@ -787,53 +829,9 @@ void machine_crash_shutdown(struct pt_regs *regs) } #endif =20 -#if IS_ENABLED(CONFIG_KVM_INTEL) || IS_ENABLED(CONFIG_KVM_AMD) -/* RCU-protected callback to disable virtualization prior to reboot. 
-static cpu_emergency_virt_cb __rcu *cpu_emergency_virt_callback;
-
-void cpu_emergency_register_virt_callback(cpu_emergency_virt_cb *callback)
-{
-	if (WARN_ON_ONCE(rcu_access_pointer(cpu_emergency_virt_callback)))
-		return;
-
-	rcu_assign_pointer(cpu_emergency_virt_callback, callback);
-}
-EXPORT_SYMBOL_GPL(cpu_emergency_register_virt_callback);
-
-void cpu_emergency_unregister_virt_callback(cpu_emergency_virt_cb *callback)
-{
-	if (WARN_ON_ONCE(rcu_access_pointer(cpu_emergency_virt_callback) != callback))
-		return;
-
-	rcu_assign_pointer(cpu_emergency_virt_callback, NULL);
-	synchronize_rcu();
-}
-EXPORT_SYMBOL_GPL(cpu_emergency_unregister_virt_callback);
-#endif
-
 /* This is the CPU performing the emergency shutdown work. */
 int crashing_cpu = -1;
 
-/*
- * Disable virtualization, i.e. VMX or SVM, to ensure INIT is recognized during
- * reboot.  VMX blocks INIT if the CPU is post-VMXON, and SVM blocks INIT if
- * GIF=0, i.e. if the crash occurred between CLGI and STGI.
- */
-void cpu_emergency_disable_virtualization(void)
-{
-	lockdep_assert_irqs_disabled();
-
-#if IS_ENABLED(CONFIG_KVM_INTEL) || IS_ENABLED(CONFIG_KVM_AMD)
-	cpu_emergency_virt_cb *callback;
-
-	rcu_read_lock();
-	callback = rcu_dereference(cpu_emergency_virt_callback);
-	if (callback)
-		callback();
-	rcu_read_unlock();
-#endif
-}
-
 #if defined(CONFIG_SMP)
 
 static nmi_shootdown_cb shootdown_callback;
-- 
2.39.0.rc0.267.gcb52ba06e7-goog
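
Not part of the patch above: a minimal sketch of what the follow-up stubbing
mentioned in the changelog could look like once the hoisted helpers sit
directly above emergency_reboot_disable_virtualization(). The exact form of
that future patch is an assumption here; only the intent (compiling out the
emergency virtualization teardown when neither KVM_INTEL nor KVM_AMD is
enabled) comes from the changelog, and the existing function body is elided.

  #if IS_ENABLED(CONFIG_KVM_INTEL) || IS_ENABLED(CONFIG_KVM_AMD)
  /* cpu_emergency_*_virt_callback() and friends, as hoisted above ... */

  static void emergency_reboot_disable_virtualization(void)
  {
          /* Just make sure we won't change CPUs while doing this */
          /* (existing body, unchanged) */
  }
  #else
  /* No in-tree user of CPU virtualization, so there is nothing to disable. */
  static void emergency_reboot_disable_virtualization(void) { }
  #endif /* CONFIG_KVM_INTEL || CONFIG_KVM_AMD */

Such a layout would mirror the <asm/reboot.h> hunk above, which already gains
a static inline stub for cpu_emergency_disable_virtualization() when neither
KVM config option is enabled.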