Add SRCU read-side protection to KVM_GET_SREGS2 ioctl handler.
__get_sregs2() may read guest memory when caching PDPTR registers:
__get_sregs2() -> kvm_pdptr_read() -> svm_cache_reg() -> load_pdptrs()
-> kvm_vcpu_read_guest_page() -> kvm_vcpu_gfn_to_memslot()
kvm_vcpu_gfn_to_memslot() dereferences memslots via __kvm_memslots(),
which uses srcu_dereference_check() and requires either kvm->srcu or
kvm->slots_lock to be held. Currently only vcpu->mutex is held,
triggering a lockdep warning:
=============================
WARNING: suspicious RCU usage in kvm_vcpu_gfn_to_memslot
6.12.59+ #3 Not tainted
-----------------------------
include/linux/kvm_host.h:1062 suspicious rcu_dereference_check() usage!
other info that might help us debug this:
rcu_scheduler_active = 2, debug_locks = 1
1 lock held by syz.5.1717/15100:
#0: ff1100002f4b00b0 (&vcpu->mutex){+.+.}-{3:3}, at: kvm_vcpu_ioctl+0x1d5/0x1590
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0xf0/0x120 lib/dump_stack.c:120
lockdep_rcu_suspicious+0x1e3/0x270 kernel/locking/lockdep.c:6824
__kvm_memslots include/linux/kvm_host.h:1062 [inline]
__kvm_memslots include/linux/kvm_host.h:1059 [inline]
kvm_vcpu_memslots include/linux/kvm_host.h:1076 [inline]
kvm_vcpu_gfn_to_memslot+0x518/0x5e0 virt/kvm/kvm_main.c:2617
kvm_vcpu_read_guest_page+0x27/0x50 virt/kvm/kvm_main.c:3302
load_pdptrs+0xff/0x4b0 arch/x86/kvm/x86.c:1065
svm_cache_reg+0x1c9/0x230 arch/x86/kvm/svm/svm.c:1688
kvm_pdptr_read arch/x86/kvm/kvm_cache_regs.h:141 [inline]
__get_sregs2 arch/x86/kvm/x86.c:11784 [inline]
kvm_arch_vcpu_ioctl+0x3e20/0x4aa0 arch/x86/kvm/x86.c:6279
kvm_vcpu_ioctl+0x856/0x1590 virt/kvm/kvm_main.c:4663
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:907 [inline]
__se_sys_ioctl fs/ioctl.c:893 [inline]
__x64_sys_ioctl+0x18b/0x210 fs/ioctl.c:893
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xbd/0x1d0 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f
Found by Linux Verification Center (linuxtesting.org) with Syzkaller.
Cc: stable@vger.kernel.org
Fixes: 6dba94035203 ("KVM: x86: Introduce KVM_GET_SREGS2 / KVM_SET_SREGS2")
Signed-off-by: Vasiliy Kovalev <kovalev@altlinux.org>
---
Note 1: commit 85e5ba83c016 ("KVM: x86: Do all post-set CPUID processing
during vCPU creation") in v6.14+ reduces the likelihood of hitting this
path by ensuring proper MMU initialization, but does not eliminate the
requirement for SRCU protection when accessing guest memory.
Note 2: KVM_SET_SREGS2 is not modified because __set_sregs_common()
already acquires SRCU when update_pdptrs=true, which covers the case
when PDPTRs must be loaded from guest memory.
---
arch/x86/kvm/x86.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8acfdfc583a1..73c900c72f31 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6619,7 +6619,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 		r = -ENOMEM;
 		if (!u.sregs2)
 			goto out;
+		kvm_vcpu_srcu_read_lock(vcpu);
 		__get_sregs2(vcpu, u.sregs2);
+		kvm_vcpu_srcu_read_unlock(vcpu);
 		r = -EFAULT;
 		if (copy_to_user(argp, u.sregs2, sizeof(struct kvm_sregs2)))
 			goto out;
--
2.50.1
On Fri, Jan 16, 2026, Vasiliy Kovalev wrote:
> ---
> Note 1: commit 85e5ba83c016 ("KVM: x86: Do all post-set CPUID processing
> during vCPU creation") in v6.14+ reduces the likelihood of hitting this
> path by ensuring proper MMU initialization, but does not eliminate the
> requirement for SRCU protection when accessing guest memory.
>
> Note 2: KVM_SET_SREGS2 is not modified because __set_sregs_common()
> already acquires SRCU when update_pdptrs=true, which covers the case
> when PDPTRs must be loaded from guest memory.
On the topic of the update_pdptrs behavior, what if we scope the fix to precisely
reading the PDPTRs? Not for performance reasons, but for documentation purposes,
e.g. so that future readers don't look at __get_sregs() and wonder why that call
isn't wrapped with SRCU protection.
I.e.
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e013392fe20c..a5a65dde89c0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12145,9 +12145,11 @@ static void __get_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2)
 		return;
 
 	if (is_pae_paging(vcpu)) {
+		kvm_vcpu_srcu_read_lock(vcpu);
 		for (i = 0 ; i < 4 ; i++)
 			sregs2->pdptrs[i] = kvm_pdptr_read(vcpu, i);
 		sregs2->flags |= KVM_SREGS2_FLAGS_PDPTRS_VALID;
+		kvm_vcpu_srcu_read_unlock(vcpu);
 	}
 }
> ---
> arch/x86/kvm/x86.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 8acfdfc583a1..73c900c72f31 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6619,7 +6619,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>  		r = -ENOMEM;
>  		if (!u.sregs2)
>  			goto out;
> +		kvm_vcpu_srcu_read_lock(vcpu);
>  		__get_sregs2(vcpu, u.sregs2);
> +		kvm_vcpu_srcu_read_unlock(vcpu);
>  		r = -EFAULT;
>  		if (copy_to_user(argp, u.sregs2, sizeof(struct kvm_sregs2)))
>  			goto out;
> --
> 2.50.1
>
On 1/23/26 20:24, Sean Christopherson wrote:
> On Fri, Jan 16, 2026, Vasiliy Kovalev wrote:
>> ---
>> Note 1: commit 85e5ba83c016 ("KVM: x86: Do all post-set CPUID processing
>> during vCPU creation") in v6.14+ reduces the likelihood of hitting this
>> path by ensuring proper MMU initialization, but does not eliminate the
>> requirement for SRCU protection when accessing guest memory.
>>
>> Note 2: KVM_SET_SREGS2 is not modified because __set_sregs_common()
>> already acquires SRCU when update_pdptrs=true, which covers the case
>> when PDPTRs must be loaded from guest memory.
>
> On the topic of the update_pdptrs behavior, what if we scope the fix to precisely
> reading the PDPTRs? Not for performance reasons, but for documentation purposes,
> e.g. so that future readers don't look at __get_sregs() and wonder why that call
> isn't wrapped with SRCU protection.
Agreed, moving the lock inside __get_sregs2() makes the requirements
clearer. I've verified it fixes the issue and sent v2:
https://lore.kernel.org/all/20260123222801.646123-1-kovalev@altlinux.org/
--
Thanks,
Vasiliy