Leave KVM's user-return notifier registered in the unlikely case that the
notifier is registered when disabling virtualization via IPI callback in
response to reboot/shutdown. On reboot/shutdown, keeping the notifier
registered is ok as far as MSR state is concerned (arguably better than
restoring MSRs at an unknown point in time), as the callback will run
cleanly and restore host MSRs if the CPU manages to return to userspace
before the system goes down.
The only wrinkle is that if kvm.ko module unload manages to race with
reboot/shutdown, then leaving the notifier registered could lead to
use-after-free due to calling into unloaded kvm.ko module code. But such
a race is only possible on --forced reboot/shutdown, because otherwise
userspace tasks would be frozen before kvm_shutdown() is called, i.e. on a
"normal" reboot/shutdown, it should be impossible for the CPU to return to
userspace after kvm_shutdown().
Furthermore, on a --forced reboot/shutdown, unregistering the user-return
hook from IRQ context doesn't fully guard against use-after-free, because
KVM could immediately re-register the hook, e.g. if the IRQ arrives before
kvm_user_return_register_notifier() is called.
Rather than trying to guard against the IPI in the "normal" user-return
code, which is difficult and noisy, simply leave the user-return notifier
registered on a reboot, and bump the kvm.ko module refcount to defend
against a use-after-free due to kvm.ko unload racing against reboot.
Alternatively, KVM could allow unloading kvm.ko and try to drop the notifiers
during kvm_x86_exit(), but that's also a can of worms as registration is per-CPU,
and so KVM would need to blast an IPI, and doing so while a reboot/shutdown
is in-progress is far riskier than preventing userspace from unloading KVM.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/x86.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b4b5d2d09634..386dc2401f58 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13078,7 +13078,21 @@ int kvm_arch_enable_virtualization_cpu(void)
void kvm_arch_disable_virtualization_cpu(void)
{
kvm_x86_call(disable_virtualization_cpu)();
- drop_user_return_notifiers();
+
+ /*
+ * Leave the user-return notifiers as-is when disabling virtualization
+ * for reboot, i.e. when disabling via IPI function call, and instead
+ * pin kvm.ko (if it's a module) to defend against use-after-free (in
+ * the *very* unlikely scenario module unload is racing with reboot).
+ * On a forced reboot, tasks aren't frozen before shutdown, and so KVM
+ * could be actively modifying user-return MSR state when the IPI to
+ * disable virtualization arrives. Handle the extreme edge case here
+ * instead of trying to account for it in the normal flows.
+ */
+ if (in_task() || WARN_ON_ONCE(!kvm_rebooting))
+ drop_user_return_notifiers();
+ else
+ __module_get(THIS_MODULE);
}
bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu)
@@ -14363,6 +14377,11 @@ module_init(kvm_x86_init);
static void __exit kvm_x86_exit(void)
{
+ int cpu;
+
+ for_each_possible_cpu(cpu)
+ WARN_ON_ONCE(per_cpu_ptr(user_return_msrs, cpu)->registered);
+
WARN_ON_ONCE(static_branch_unlikely(&kvm_has_noapic_vcpu));
}
module_exit(kvm_x86_exit);
--
2.51.0.858.gf9c4a03a3a-goog
> bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu)
>@@ -14363,6 +14377,11 @@ module_init(kvm_x86_init);
>
> static void __exit kvm_x86_exit(void)
> {
>+ int cpu;
>+
>+ for_each_possible_cpu(cpu)
>+ WARN_ON_ONCE(per_cpu_ptr(user_return_msrs, cpu)->registered);
Is it OK to reference user_return_msrs during kvm.ko unloading? IIUC,
user_return_msrs has already been freed during kvm-{intel,amd}.ko unloading.
See:
vmx_exit/svm_exit()
-> kvm_x86_vendor_exit()
-> free_percpu(user_return_msrs);
>+
> WARN_ON_ONCE(static_branch_unlikely(&kvm_has_noapic_vcpu));
> }
> module_exit(kvm_x86_exit);
>--
>2.51.0.858.gf9c4a03a3a-goog
>
>
On Fri, Oct 17, 2025, Chao Gao wrote:
> > bool kvm_vcpu_is_reset_bsp(struct kvm_vcpu *vcpu)
> >@@ -14363,6 +14377,11 @@ module_init(kvm_x86_init);
> >
> > static void __exit kvm_x86_exit(void)
> > {
> >+ int cpu;
> >+
> >+ for_each_possible_cpu(cpu)
> >+ WARN_ON_ONCE(per_cpu_ptr(user_return_msrs, cpu)->registered);
>
> Is it OK to reference user_return_msrs during kvm.ko unloading? IIUC,
> user_return_msrs has already been freed during kvm-{intel,amd}.ko unloading.
> See:
>
> vmx_exit/svm_exit()
> -> kvm_x86_vendor_exit()
> -> free_percpu(user_return_msrs);
Ouch. Guess who didn't run with KASAN...
And rather than squeezing the WARN into this patch, I'm strongly leaning toward
adding it in a prep patch, as the WARN is valuable irrespective of how KVM handles
reboot.
Not yet tested...
--
From: Sean Christopherson <seanjc@google.com>
Date: Fri, 17 Oct 2025 06:10:30 -0700
Subject: [PATCH 2/5] KVM: x86: WARN if user-return MSR notifier is registered
on exit
When freeing the per-CPU user-return MSRs structures, WARN if any CPU has
a registered notifier to help detect and/or debug potential use-after-free
issues. The lifecycle of the notifiers is rather convoluted, and has
several non-obvious paths where notifiers are unregistered, i.e. isn't
exactly the most robust code possible.
The notifiers are registered on-demand in KVM, on the first WRMSR to
a tracked register. _Usually_ the notifier is unregistered whenever the
CPU returns to userspace. But because any given CPU isn't guaranteed to
return to userspace, e.g. the CPU could be offlined before doing so, KVM
also "drops", a.k.a. unregisters, the notifiers when virtualization is
disabled on the CPU.
Further complicating the unregister path is the fact that the calls to
disable virtualization come from common KVM, and the per-CPU calls are
guarded by a per-CPU flag (to harden _that_ code against bugs, e.g. due to
mishandling reboot). Reboot/shutdown in particular is problematic, as KVM
disables virtualization via IPI function call, i.e. from IRQ context,
instead of using the cpuhp framework, which runs in task context. I.e. on
reboot/shutdown, drop_user_return_notifiers() is called asynchronously.
Forced reboot/shutdown is the most problematic scenario, as userspace tasks
are not frozen before kvm_shutdown() is invoked, i.e. KVM could be actively
manipulating the user-return MSR lists and/or notifiers when the IPI
arrives. To a certain extent, all bets are off when userspace forces a
reboot/shutdown, but KVM should at least avoid a use-after-free, e.g. to
avoid crashing the kernel when trying to reboot.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/x86.c | 33 +++++++++++++++++++++++++--------
1 file changed, 25 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b4b5d2d09634..334a911b36c5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -575,6 +575,27 @@ static inline void kvm_async_pf_hash_reset(struct kvm_vcpu *vcpu)
vcpu->arch.apf.gfns[i] = ~0;
}
+static int kvm_init_user_return_msrs(void)
+{
+ user_return_msrs = alloc_percpu(struct kvm_user_return_msrs);
+ if (!user_return_msrs) {
+ pr_err("failed to allocate percpu user_return_msrs\n");
+ return -ENOMEM;
+ }
+ kvm_nr_uret_msrs = 0;
+ return 0;
+}
+
+static void kvm_free_user_return_msrs(void)
+{
+ int cpu;
+
+ for_each_possible_cpu(cpu)
+ WARN_ON_ONCE(per_cpu_ptr(user_return_msrs, cpu)->registered);
+
+ free_percpu(user_return_msrs);
+}
+
static void kvm_on_user_return(struct user_return_notifier *urn)
{
unsigned slot;
@@ -10032,13 +10053,9 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
return -ENOMEM;
}
- user_return_msrs = alloc_percpu(struct kvm_user_return_msrs);
- if (!user_return_msrs) {
- pr_err("failed to allocate percpu kvm_user_return_msrs\n");
- r = -ENOMEM;
+ r = kvm_init_user_return_msrs();
+ if (r)
goto out_free_x86_emulator_cache;
- }
- kvm_nr_uret_msrs = 0;
r = kvm_mmu_vendor_module_init();
if (r)
@@ -10141,7 +10158,7 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
out_mmu_exit:
kvm_mmu_vendor_module_exit();
out_free_percpu:
- free_percpu(user_return_msrs);
+ kvm_free_user_return_msrs();
out_free_x86_emulator_cache:
kmem_cache_destroy(x86_emulator_cache);
return r;
@@ -10170,7 +10187,7 @@ void kvm_x86_vendor_exit(void)
#endif
kvm_x86_call(hardware_unsetup)();
kvm_mmu_vendor_module_exit();
- free_percpu(user_return_msrs);
+ kvm_free_user_return_msrs();
kmem_cache_destroy(x86_emulator_cache);
#ifdef CONFIG_KVM_XEN
static_key_deferred_flush(&kvm_xen_enabled);
--