From: Sean Christopherson
Date: Wed, 11 May 2022 23:43:31 +0000
Subject: [PATCH 1/2] x86/crash: Disable virt in core NMI crash handler to avoid double list_add
Message-Id: <20220511234332.3654455-2-seanjc@google.com>
In-Reply-To: <20220511234332.3654455-1-seanjc@google.com>
References: <20220511234332.3654455-1-seanjc@google.com>
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org
Cc: "H. Peter Anvin", linux-kernel@vger.kernel.org, "Guilherme G. Piccoli", Vitaly Kuznetsov, Paolo Bonzini, Sean Christopherson

Disable virtualization in crash_nmi_callback() and skip the requested
NMI shootdown if a shootdown has already occurred, i.e. a callback has
already been registered.  The NMI crash shootdown path doesn't play nice
with multiple invocations, e.g. attempting to register the NMI handler
multiple times will trigger a double list_add() and hang the system (in
addition to multiple other issues).

If "crash_kexec_post_notifiers" is specified on the kernel command line,
panic() will invoke crash_smp_send_stop() and result in a second call to
nmi_shootdown_cpus() during native_machine_emergency_restart().

Invoke the callback _before_ disabling virtualization, as the current
VMCS needs to be cleared before doing VMXOFF.  Note, this results in a
subtle change in ordering between disabling virtualization and stopping
Intel PT on the responding CPUs.  While VMX and Intel PT do interact,
VMXOFF and writes to MSR_IA32_RTIT_CTL do not induce faults between one
another, which is all that matters when panicking.

WARN if nmi_shootdown_cpus() is called a second time with anything other
than the reboot path's "nop" handler, as bailing means the requested
callback isn't being invoked.  Punt true handling of multiple shootdown
callbacks until there's an actual use case for doing so (beyond
disabling virtualization).

Extract the disabling logic to a common helper to deduplicate code, and
to prepare for doing the shootdown in the emergency reboot path if SVM
is supported.

Note, prior to commit ed72736183c4 ("x86/reboot: Force all cpus to exit
VMX root if VMX is supported"), nmi_shootdown_cpus() was subtly
protected against a second invocation by a cpu_vmx_enabled() check, as
the kdump handler would disable VMX if it ran first.

Fixes: ed72736183c4 ("x86/reboot: Force all cpus to exit VMX root if VMX is supported")
Cc: stable@vger.kernel.org
Reported-and-tested-by: Guilherme G. Piccoli
Cc: Vitaly Kuznetsov
Cc: Paolo Bonzini
Link: https://lore.kernel.org/all/20220427224924.592546-2-gpiccoli@igalia.com
Signed-off-by: Sean Christopherson
Reviewed-by: Vitaly Kuznetsov
---
 arch/x86/include/asm/reboot.h |  1 +
 arch/x86/kernel/crash.c       | 16 +--------------
 arch/x86/kernel/reboot.c      | 38 ++++++++++++++++++++++++++++++++---
 3 files changed, 37 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/reboot.h b/arch/x86/include/asm/reboot.h
index 04c17be9b5fd..8f2da36435a6 100644
--- a/arch/x86/include/asm/reboot.h
+++ b/arch/x86/include/asm/reboot.h
@@ -25,6 +25,7 @@ void __noreturn machine_real_restart(unsigned int type);
 #define MRR_BIOS	0
 #define MRR_APM		1
 
+void cpu_crash_disable_virtualization(void);
 typedef void (*nmi_shootdown_cb)(int, struct pt_regs*);
 void nmi_panic_self_stop(struct pt_regs *regs);
 void nmi_shootdown_cpus(nmi_shootdown_cb callback);
diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index e8326a8d1c5d..fe0cf83843ba 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -81,15 +81,6 @@ static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
 	 */
 	cpu_crash_vmclear_loaded_vmcss();
 
-	/* Disable VMX or SVM if needed.
-	 *
-	 * We need to disable virtualization on all CPUs.
-	 * Having VMX or SVM enabled on any CPU may break rebooting
-	 * after the kdump kernel has finished its task.
-	 */
-	cpu_emergency_vmxoff();
-	cpu_emergency_svm_disable();
-
 	/*
 	 * Disable Intel PT to stop its logging
 	 */
@@ -148,12 +139,7 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
 	 */
 	cpu_crash_vmclear_loaded_vmcss();
 
-	/* Booting kdump kernel with VMX or SVM enabled won't work,
-	 * because (among other limitations) we can't disable paging
-	 * with the virt flags.
-	 */
-	cpu_emergency_vmxoff();
-	cpu_emergency_svm_disable();
+	cpu_crash_disable_virtualization();
 
 	/*
 	 * Disable Intel PT to stop its logging
diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index fa700b46588e..f9543a4e9b09 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -528,9 +528,9 @@ static inline void kb_wait(void)
 	}
 }
 
-static void vmxoff_nmi(int cpu, struct pt_regs *regs)
+static void nmi_shootdown_nop(int cpu, struct pt_regs *regs)
 {
-	cpu_emergency_vmxoff();
+	/* Nothing to do, the NMI shootdown handler disables virtualization. */
 }
 
 /* Use NMIs as IPIs to tell all CPUs to disable virtualization */
@@ -554,7 +554,7 @@ static void emergency_vmx_disable_all(void)
 		__cpu_emergency_vmxoff();
 
 		/* Halt and exit VMX root operation on the other CPUs. */
-		nmi_shootdown_cpus(vmxoff_nmi);
+		nmi_shootdown_cpus(nmi_shootdown_nop);
 	}
 }
 
@@ -802,6 +802,18 @@ static nmi_shootdown_cb shootdown_callback;
 static atomic_t waiting_for_crash_ipi;
 static int crash_ipi_issued;
 
+void cpu_crash_disable_virtualization(void)
+{
+	/*
+	 * Disable virtualization, i.e. VMX or SVM, so that INIT is recognized
+	 * during reboot.  VMX blocks INIT if the CPU is post-VMXON, and SVM
+	 * blocks INIT if GIF=0.  Note, CLGI #UDs if SVM isn't enabled, so it's
+	 * easier to just disable SVM unconditionally.
+	 */
+	cpu_emergency_vmxoff();
+	cpu_emergency_svm_disable();
+}
+
 static int crash_nmi_callback(unsigned int val, struct pt_regs *regs)
 {
 	int cpu;
@@ -819,6 +831,12 @@ static int crash_nmi_callback(unsigned int val, struct pt_regs *regs)
 
 	shootdown_callback(cpu, regs);
 
+	/*
+	 * Prepare the CPU for reboot _after_ invoking the callback so that the
+	 * callback can safely use virtualization instructions, e.g. VMCLEAR.
+	 */
+	cpu_crash_disable_virtualization();
+
 	atomic_dec(&waiting_for_crash_ipi);
 	/* Assume hlt works */
 	halt();
@@ -840,6 +858,20 @@ void nmi_shootdown_cpus(nmi_shootdown_cb callback)
 	unsigned long msecs;
 
 	local_irq_disable();
 
+	/*
+	 * Invoking multiple callbacks is not currently supported; registering
+	 * the NMI handler twice will cause a list_add() double add BUG().
+	 * The exception is the "nop" handler in the emergency reboot path,
+	 * which can run after e.g. kdump's shootdown.  Do nothing if the crash
+	 * handler has already run, i.e. has already prepared other CPUs; the
+	 * reboot path doesn't have any work of its own to do, it just needs to
+	 * ensure all CPUs have prepared for reboot.
+	 */
+	if (shootdown_callback) {
+		WARN_ON_ONCE(callback != nmi_shootdown_nop);
+		return;
+	}
+
 	/* Make a note of crashing cpu. Will be used in NMI callback. */
 	crashing_cpu = safe_smp_processor_id();
 
-- 
2.36.0.512.ge40c2bad7a-goog
From: Sean Christopherson
Date: Wed, 11 May 2022 23:43:32 +0000
Subject: [PATCH 2/2] x86/reboot: Disable virtualization in an emergency if SVM is supported
Message-Id: <20220511234332.3654455-3-seanjc@google.com>
In-Reply-To: <20220511234332.3654455-1-seanjc@google.com>
References: <20220511234332.3654455-1-seanjc@google.com>
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org
Cc: "H. Peter Anvin", linux-kernel@vger.kernel.org, "Guilherme G. Piccoli", Vitaly Kuznetsov, Paolo Bonzini, Sean Christopherson

Disable SVM on all CPUs via NMI shootdown during an emergency reboot.
Like VMX, SVM can block INIT and thus prevent bringing up other CPUs via
INIT-SIPI-SIPI.

Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
Reviewed-by: Vitaly Kuznetsov
---
 arch/x86/kernel/reboot.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index f9543a4e9b09..33c1f4883b27 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -533,27 +533,29 @@ static void nmi_shootdown_nop(int cpu, struct pt_regs *regs)
 	/* Nothing to do, the NMI shootdown handler disables virtualization. */
 }
 
-/* Use NMIs as IPIs to tell all CPUs to disable virtualization */
-static void emergency_vmx_disable_all(void)
+static void emergency_reboot_disable_virtualization(void)
 {
 	/* Just make sure we won't change CPUs while doing this */
 	local_irq_disable();
 
 	/*
-	 * Disable VMX on all CPUs before rebooting, otherwise we risk hanging
-	 * the machine, because the CPU blocks INIT when it's in VMX root.
+	 * Disable virtualization on all CPUs before rebooting to avoid hanging
+	 * the system, as VMX and SVM block INIT when running in the host.
 	 *
 	 * We can't take any locks and we may be on an inconsistent state, so
-	 * use NMIs as IPIs to tell the other CPUs to exit VMX root and halt.
+	 * use NMIs as IPIs to tell the other CPUs to disable VMX/SVM and halt.
 	 *
-	 * Do the NMI shootdown even if VMX if off on _this_ CPU, as that
-	 * doesn't prevent a different CPU from being in VMX root operation.
+	 * Do the NMI shootdown even if virtualization is off on _this_ CPU, as
+	 * other CPUs may have virtualization enabled.
 	 */
-	if (cpu_has_vmx()) {
-		/* Safely force _this_ CPU out of VMX root operation. */
-		__cpu_emergency_vmxoff();
+	if (cpu_has_vmx() || cpu_has_svm(NULL)) {
+		/* Safely force _this_ CPU out of VMX/SVM operation. */
+		if (cpu_has_vmx())
+			__cpu_emergency_vmxoff();
+		else
+			cpu_emergency_svm_disable();
 
-		/* Halt and exit VMX root operation on the other CPUs. */
+		/* Disable VMX/SVM and halt on other CPUs. */
 		nmi_shootdown_cpus(nmi_shootdown_nop);
 	}
 }
@@ -590,7 +592,7 @@ static void native_machine_emergency_restart(void)
 	unsigned short mode;
 
 	if (reboot_emergency)
-		emergency_vmx_disable_all();
+		emergency_reboot_disable_virtualization();
 
 	tboot_shutdown(TB_SHUTDOWN_REBOOT);
 
-- 
2.36.0.512.ge40c2bad7a-goog
From: "tip-bot2 for Thomas Gleixner"
Sender: tip-bot2@linutronix.de
Date: Tue, 17 May 2022 07:34:14 -0000
Subject: [tip: x86/core] x86/nmi: Make register_nmi_handler() more robust
Message-ID: <165277285410.4207.10970267068162746336.tip-bot2@tip-bot2>
In-Reply-To: <20220511234332.3654455-1-seanjc@google.com>
References: <20220511234332.3654455-1-seanjc@google.com>
To: linux-tip-commits@vger.kernel.org
Cc: Sean Christopherson, Thomas Gleixner, Borislav Petkov, x86@kernel.org, linux-kernel@vger.kernel.org

The following commit has been merged into the x86/core branch of tip:

Commit-ID:     a7fed5c0431dbfa707037848830f980e0f93cfb3
Gitweb:        https://git.kernel.org/tip/a7fed5c0431dbfa707037848830f980e0f93cfb3
Author:        Thomas Gleixner
AuthorDate:    Sun, 15 May 2022 13:39:34 +02:00
Committer:     Borislav Petkov
CommitterDate: Tue, 17 May 2022 09:25:25 +02:00

x86/nmi: Make register_nmi_handler() more robust

register_nmi_handler() has no sanity check whether a handler has been
registered already.  Such an unintended double-add leads to list
corruption and hard to diagnose problems during the next NMI handling.

Init the list head in the static NMI action struct and check it for
being empty in register_nmi_handler().

[ bp: Fixups. ]

Reported-by: Sean Christopherson
Signed-off-by: Thomas Gleixner
Signed-off-by: Borislav Petkov
Link: https://lore.kernel.org/lkml/20220511234332.3654455-1-seanjc@google.com
---
 arch/x86/include/asm/nmi.h |  1 +
 arch/x86/kernel/nmi.c      | 12 ++++++++----
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/nmi.h b/arch/x86/include/asm/nmi.h
index 1cb9c17..5c5f1e5 100644
--- a/arch/x86/include/asm/nmi.h
+++ b/arch/x86/include/asm/nmi.h
@@ -47,6 +47,7 @@ struct nmiaction {
 #define register_nmi_handler(t, fn, fg, n, init...)	\
 ({							\
 	static struct nmiaction init fn##_na = {	\
+		.list = LIST_HEAD_INIT(fn##_na.list),	\
 		.handler = (fn),			\
 		.name = (n),				\
 		.flags = (fg),				\
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index e73f7df..cec0bfa 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -157,7 +157,7 @@ int __register_nmi_handler(unsigned int type, struct nmiaction *action)
 	struct nmi_desc *desc = nmi_to_desc(type);
 	unsigned long flags;
 
-	if (!action->handler)
+	if (WARN_ON_ONCE(!action->handler || !list_empty(&action->list)))
 		return -EINVAL;
 
 	raw_spin_lock_irqsave(&desc->lock, flags);
@@ -177,7 +177,7 @@ int __register_nmi_handler(unsigned int type, struct nmiaction *action)
 		list_add_rcu(&action->list, &desc->head);
 	else
 		list_add_tail_rcu(&action->list, &desc->head);
-	
+
 	raw_spin_unlock_irqrestore(&desc->lock, flags);
 	return 0;
 }
@@ -186,7 +186,7 @@ EXPORT_SYMBOL(__register_nmi_handler);
 void unregister_nmi_handler(unsigned int type, const char *name)
 {
 	struct nmi_desc *desc = nmi_to_desc(type);
-	struct nmiaction *n;
+	struct nmiaction *n, *found = NULL;
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&desc->lock, flags);
@@ -200,12 +200,16 @@ void unregister_nmi_handler(unsigned int type, const char *name)
 			WARN(in_nmi(), "Trying to free NMI (%s) from NMI context!\n", n->name);
 			list_del_rcu(&n->list);
+			found = n;
 			break;
 		}
 	}
 
 	raw_spin_unlock_irqrestore(&desc->lock, flags);
-	synchronize_rcu();
+	if (found) {
+		synchronize_rcu();
+		INIT_LIST_HEAD(&found->list);
+	}
 }
 EXPORT_SYMBOL_GPL(unregister_nmi_handler);
From: Thomas Gleixner
Date: Fri, 13 May 2022 13:10:06 +0200
Subject: [PATCH] x86/nmi: Make register_nmi_handler() more robust
Message-ID: <87zgjlsn75.ffs@tglx>
In-Reply-To: <20220511234332.3654455-1-seanjc@google.com>
References: <20220511234332.3654455-1-seanjc@google.com>
To: Sean Christopherson, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org
Cc: "H. Peter Anvin", linux-kernel@vger.kernel.org, "Guilherme G. Piccoli", Vitaly Kuznetsov, Paolo Bonzini, Sean Christopherson

register_nmi_handler() has no sanity check whether a handler has been
registered already.  Such an unintended double-add leads to list
corruption and hard to diagnose problems during the next NMI handling.

Init the list head in the static nmi action struct and check it for
being empty in register_nmi_handler().

Reported-by: Sean Christopherson
Signed-off-by: Thomas Gleixner
---
 arch/x86/include/asm/nmi.h |  1 +
 arch/x86/kernel/nmi.c      | 10 +++++++---
 2 files changed, 8 insertions(+), 3 deletions(-)

--- a/arch/x86/include/asm/nmi.h
+++ b/arch/x86/include/asm/nmi.h
@@ -47,6 +47,7 @@ struct nmiaction {
 #define register_nmi_handler(t, fn, fg, n, init...)	\
 ({							\
 	static struct nmiaction init fn##_na = {	\
+		.list = LIST_HEAD_INIT(fn##_na.list),	\
 		.handler = (fn),			\
 		.name = (n),				\
 		.flags = (fg),				\
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -157,7 +157,7 @@ int __register_nmi_handler(unsigned int
 	struct nmi_desc *desc = nmi_to_desc(type);
 	unsigned long flags;
 
-	if (!action->handler)
+	if (WARN_ON_ONCE(!action->handler || !list_empty(&action->list)))
 		return -EINVAL;
 
 	raw_spin_lock_irqsave(&desc->lock, flags);
@@ -186,7 +186,7 @@ EXPORT_SYMBOL(__register_nmi_handler);
 void unregister_nmi_handler(unsigned int type, const char *name)
 {
 	struct nmi_desc *desc = nmi_to_desc(type);
-	struct nmiaction *n;
+	struct nmiaction *n, *found = NULL;
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&desc->lock, flags);
@@ -200,12 +200,16 @@ void unregister_nmi_handler(unsigned int
 			WARN(in_nmi(), "Trying to free NMI (%s) from NMI context!\n", n->name);
 			list_del_rcu(&n->list);
+			found = n;
 			break;
 		}
 	}
 
 	raw_spin_unlock_irqrestore(&desc->lock, flags);
-	synchronize_rcu();
+	if (found) {
+		synchronize_rcu();
+		INIT_LIST_HEAD(&found->list);
+	}
 }
 EXPORT_SYMBOL_GPL(unregister_nmi_handler);