From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Cc: "H. Peter Anvin", x86@kernel.org, Randy Dunlap, Paolo Bonzini, Will Deacon, Oliver Upton, Kunkun Jiang, Jing Zhang, Albert Ou, Keisuke Nishimura, Anup Patel, Catalin Marinas, Atish Patra, kvmarm@lists.linux.dev, Waiman Long, Boqun Feng, linux-arm-kernel@lists.infradead.org, Peter Zijlstra, Dave Hansen, Paul Walmsley, Suzuki K Poulose, Zenghui Yu, Sebastian Ott, Andre Przywara, Ingo Molnar, Alexandre Ghiti, Bjorn Helgaas, Palmer Dabbelt, Joey Gouly, Borislav Petkov, Sean Christopherson, Marc Zyngier, Alexander Potapenko, Thomas Gleixner, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, Shusen Li, kvm-riscv@lists.infradead.org
Subject: [PATCH v3 1/4] arm64: KVM: use mutex_trylock_nest_lock when locking all vCPUs
Date: Wed, 30 Apr 2025 16:23:08 -0400
Message-ID: <20250430202311.364641-2-mlevitsk@redhat.com>
In-Reply-To: <20250430202311.364641-1-mlevitsk@redhat.com>
References: <20250430202311.364641-1-mlevitsk@redhat.com>

Use mutex_trylock_nest_lock() instead of mutex_trylock() when locking
all vCPUs of a VM, to avoid triggering a lockdep warning if the VM is
configured to have more than MAX_LOCK_DEPTH vCPUs.

This fixes the following false lockdep warning:

[  328.171264] BUG: MAX_LOCK_DEPTH too low!
[  328.175227] turning off the locking correctness validator.
[  328.180726] Please attach the output of /proc/lock_stat to the bug report
[  328.187531] depth: 48  max: 48!
[  328.190678] 48 locks held by qemu-kvm/11664:
[  328.194957]  #0: ffff800086de5ba0 (&kvm->lock){+.+.}-{3:3}, at: kvm_ioctl_create_device+0x174/0x5b0
[  328.204048]  #1: ffff0800e78800b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
[  328.212521]  #2: ffff07ffeee51e98 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
[  328.220991]  #3: ffff0800dc7d80b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
[  328.229463]  #4: ffff07ffe0c980b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
[  328.237934]  #5: ffff0800a3883c78 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0
[  328.246405]  #6: ffff07fffbe480b8 (&vcpu->mutex){+.+.}-{3:3}, at: lock_all_vcpus+0x16c/0x2a0

Since locking all vCPUs is a primitive that can be useful in other
architectures supported by KVM, also move the code to kvm_main.c.

Suggested-by: Paolo Bonzini
Signed-off-by: Maxim Levitsky
---
 arch/arm64/include/asm/kvm_host.h     |  3 --
 arch/arm64/kvm/arch_timer.c           |  4 +--
 arch/arm64/kvm/arm.c                  | 43 ---------------------------
 arch/arm64/kvm/vgic/vgic-init.c       |  4 +--
 arch/arm64/kvm/vgic/vgic-its.c        |  8 ++---
 arch/arm64/kvm/vgic/vgic-kvm-device.c | 12 ++++----
 include/linux/kvm_host.h              |  3 ++
 virt/kvm/kvm_main.c                   | 34 +++++++++++++++++++++
 8 files changed, 51 insertions(+), 60 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e98cfe7855a6..96ce0b01a61e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1263,9 +1263,6 @@ int __init populate_sysreg_config(const struct sys_reg_desc *sr,
 				  unsigned int idx);
 int __init populate_nv_trap_config(void);
 
-bool lock_all_vcpus(struct kvm *kvm);
-void unlock_all_vcpus(struct kvm *kvm);
-
 void kvm_calculate_traps(struct kvm_vcpu *vcpu);
 
 /* MMIO helpers */
diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index 5133dcbfe9f7..fdbc8beec930 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -1766,7 +1766,7 @@ int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
 
 	mutex_lock(&kvm->lock);
 
-	if (lock_all_vcpus(kvm)) {
+	if (!kvm_trylock_all_vcpus(kvm)) {
 		set_bit(KVM_ARCH_FLAG_VM_COUNTER_OFFSET, &kvm->arch.flags);
 
 		/*
@@ -1778,7 +1778,7 @@ int kvm_vm_ioctl_set_counter_offset(struct kvm *kvm,
 		kvm->arch.timer_data.voffset = offset->counter_offset;
 		kvm->arch.timer_data.poffset = offset->counter_offset;
 
-		unlock_all_vcpus(kvm);
+		kvm_unlock_all_vcpus(kvm);
 	} else {
 		ret = -EBUSY;
 	}
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 68fec8c95fee..d31f42a71bdc 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1914,49 +1914,6 @@ int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
 	}
 }
 
-/* unlocks vcpus from @vcpu_lock_idx and smaller */
-static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx)
-{
-	struct kvm_vcpu *tmp_vcpu;
-
-	for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) {
-		tmp_vcpu = kvm_get_vcpu(kvm, vcpu_lock_idx);
-		mutex_unlock(&tmp_vcpu->mutex);
-	}
-}
-
-void unlock_all_vcpus(struct kvm *kvm)
-{
-	lockdep_assert_held(&kvm->lock);
-
-	unlock_vcpus(kvm, atomic_read(&kvm->online_vcpus) - 1);
-}
-
-/* Returns true if all vcpus were locked, false otherwise */
-bool lock_all_vcpus(struct kvm *kvm)
-{
-	struct kvm_vcpu *tmp_vcpu;
-	unsigned long c;
-
-	lockdep_assert_held(&kvm->lock);
-
-	/*
-	 * Any time a vcpu is in an ioctl (including running), the
-	 * core KVM code tries to grab the vcpu->mutex.
-	 *
-	 * By grabbing the vcpu->mutex of all VCPUs we ensure that no
-	 * other VCPUs can fiddle with the state while we access it.
-	 */
-	kvm_for_each_vcpu(c, tmp_vcpu, kvm) {
-		if (!mutex_trylock(&tmp_vcpu->mutex)) {
-			unlock_vcpus(kvm, c - 1);
-			return false;
-		}
-	}
-
-	return true;
-}
-
 static unsigned long nvhe_percpu_size(void)
 {
 	return (unsigned long)CHOOSE_NVHE_SYM(__per_cpu_end) -
diff --git a/arch/arm64/kvm/vgic/vgic-init.c b/arch/arm64/kvm/vgic/vgic-init.c
index 1f33e71c2a73..6a426d403a6b 100644
--- a/arch/arm64/kvm/vgic/vgic-init.c
+++ b/arch/arm64/kvm/vgic/vgic-init.c
@@ -88,7 +88,7 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
 	lockdep_assert_held(&kvm->lock);
 
 	ret = -EBUSY;
-	if (!lock_all_vcpus(kvm))
+	if (kvm_trylock_all_vcpus(kvm))
 		return ret;
 
 	mutex_lock(&kvm->arch.config_lock);
@@ -142,7 +142,7 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
 
 out_unlock:
 	mutex_unlock(&kvm->arch.config_lock);
-	unlock_all_vcpus(kvm);
+	kvm_unlock_all_vcpus(kvm);
 	return ret;
 }
 
diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
index fb96802799c6..7454388e3646 100644
--- a/arch/arm64/kvm/vgic/vgic-its.c
+++ b/arch/arm64/kvm/vgic/vgic-its.c
@@ -1999,7 +1999,7 @@ static int vgic_its_attr_regs_access(struct kvm_device *dev,
 
 	mutex_lock(&dev->kvm->lock);
 
-	if (!lock_all_vcpus(dev->kvm)) {
+	if (kvm_trylock_all_vcpus(dev->kvm)) {
 		mutex_unlock(&dev->kvm->lock);
 		return -EBUSY;
 	}
@@ -2034,7 +2034,7 @@ static int vgic_its_attr_regs_access(struct kvm_device *dev,
 	}
 out:
 	mutex_unlock(&dev->kvm->arch.config_lock);
-	unlock_all_vcpus(dev->kvm);
+	kvm_unlock_all_vcpus(dev->kvm);
 	mutex_unlock(&dev->kvm->lock);
 	return ret;
 }
@@ -2704,7 +2704,7 @@ static int vgic_its_ctrl(struct kvm *kvm, struct vgic_its *its, u64 attr)
 
 	mutex_lock(&kvm->lock);
 
-	if (!lock_all_vcpus(kvm)) {
+	if (kvm_trylock_all_vcpus(kvm)) {
 		mutex_unlock(&kvm->lock);
 		return -EBUSY;
 	}
@@ -2726,7 +2726,7 @@ static int vgic_its_ctrl(struct kvm *kvm, struct vgic_its *its, u64 attr)
 
 	mutex_unlock(&its->its_lock);
 	mutex_unlock(&kvm->arch.config_lock);
-	unlock_all_vcpus(kvm);
+	kvm_unlock_all_vcpus(kvm);
 	mutex_unlock(&kvm->lock);
 	return ret;
 }
diff --git a/arch/arm64/kvm/vgic/vgic-kvm-device.c b/arch/arm64/kvm/vgic/vgic-kvm-device.c
index 359094f68c23..f9ae790163fb 100644
--- a/arch/arm64/kvm/vgic/vgic-kvm-device.c
+++ b/arch/arm64/kvm/vgic/vgic-kvm-device.c
@@ -268,7 +268,7 @@ static int vgic_set_common_attr(struct kvm_device *dev,
 			return -ENXIO;
 		mutex_lock(&dev->kvm->lock);
 
-		if (!lock_all_vcpus(dev->kvm)) {
+		if (kvm_trylock_all_vcpus(dev->kvm)) {
 			mutex_unlock(&dev->kvm->lock);
 			return -EBUSY;
 		}
@@ -276,7 +276,7 @@ static int vgic_set_common_attr(struct kvm_device *dev,
 		mutex_lock(&dev->kvm->arch.config_lock);
 		r = vgic_v3_save_pending_tables(dev->kvm);
 		mutex_unlock(&dev->kvm->arch.config_lock);
-		unlock_all_vcpus(dev->kvm);
+		kvm_unlock_all_vcpus(dev->kvm);
 		mutex_unlock(&dev->kvm->lock);
 		return r;
 	}
@@ -390,7 +390,7 @@ static int vgic_v2_attr_regs_access(struct kvm_device *dev,
 
 	mutex_lock(&dev->kvm->lock);
 
-	if (!lock_all_vcpus(dev->kvm)) {
+	if (kvm_trylock_all_vcpus(dev->kvm)) {
 		mutex_unlock(&dev->kvm->lock);
 		return -EBUSY;
 	}
@@ -415,7 +415,7 @@ static int vgic_v2_attr_regs_access(struct kvm_device *dev,
 
 out:
 	mutex_unlock(&dev->kvm->arch.config_lock);
-	unlock_all_vcpus(dev->kvm);
+	kvm_unlock_all_vcpus(dev->kvm);
 	mutex_unlock(&dev->kvm->lock);
 
 	if (!ret && !is_write)
@@ -554,7 +554,7 @@ static int vgic_v3_attr_regs_access(struct kvm_device *dev,
 
 	mutex_lock(&dev->kvm->lock);
 
-	if (!lock_all_vcpus(dev->kvm)) {
+	if (kvm_trylock_all_vcpus(dev->kvm)) {
 		mutex_unlock(&dev->kvm->lock);
 		return -EBUSY;
 	}
@@ -611,7 +611,7 @@ static int vgic_v3_attr_regs_access(struct kvm_device *dev,
 
 out:
 	mutex_unlock(&dev->kvm->arch.config_lock);
-	unlock_all_vcpus(dev->kvm);
+	kvm_unlock_all_vcpus(dev->kvm);
 	mutex_unlock(&dev->kvm->lock);
 
 	if (!ret && uaccess && !is_write) {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 1dedc421b3e3..10d6652c7aa0 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1015,6 +1015,9 @@ static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id)
 
 void kvm_destroy_vcpus(struct kvm *kvm);
 
+int kvm_trylock_all_vcpus(struct kvm *kvm);
+void kvm_unlock_all_vcpus(struct kvm *kvm);
+
 void vcpu_load(struct kvm_vcpu *vcpu);
 void vcpu_put(struct kvm_vcpu *vcpu);
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 69782df3617f..834f08dfa24c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1368,6 +1368,40 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
 	return 0;
 }
 
+/*
+ * Try to lock all of the VM's vCPUs.
+ * Assumes that the kvm->lock is held.
+ */
+int kvm_trylock_all_vcpus(struct kvm *kvm)
+{
+	struct kvm_vcpu *vcpu;
+	unsigned long i, j;
+
+	kvm_for_each_vcpu(i, vcpu, kvm)
+		if (!mutex_trylock_nest_lock(&vcpu->mutex, &kvm->lock))
+			goto out_unlock;
+	return 0;
+
+out_unlock:
+	kvm_for_each_vcpu(j, vcpu, kvm) {
+		if (i == j)
+			break;
+		mutex_unlock(&vcpu->mutex);
+	}
+	return -EINTR;
+}
+EXPORT_SYMBOL_GPL(kvm_trylock_all_vcpus);
+
+void kvm_unlock_all_vcpus(struct kvm *kvm)
+{
+	struct kvm_vcpu *vcpu;
+	unsigned long i;
+
+	kvm_for_each_vcpu(i, vcpu, kvm)
+		mutex_unlock(&vcpu->mutex);
+}
+EXPORT_SYMBOL_GPL(kvm_unlock_all_vcpus);
+
 /*
  * Allocation size is twice as large as the actual dirty bitmap size.
  * See kvm_vm_ioctl_get_dirty_log() why this is needed.
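As a sanity check of the trylock-and-roll-back logic above, the same acquire/rollback ordering can be reproduced in plain userspace C. This is an illustration only, not kernel code: pthread mutexes stand in for vcpu->mutex, a fixed array replaces kvm_for_each_vcpu(), and the nest_lock/lockdep annotations (which only affect lock validation, not locking behavior) are omitted.

```c
#include <assert.h>
#include <pthread.h>

#define NR_VCPUS 4

/* Stand-ins for the per-vCPU vcpu->mutex, statically initialized. */
static pthread_mutex_t vcpu_mutex[NR_VCPUS] = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};

/*
 * Try to take every "vCPU" mutex in index order. If any trylock fails,
 * drop the mutexes already taken (indices 0..i-1) and return -1,
 * mirroring the goto out_unlock / -EINTR path in kvm_trylock_all_vcpus().
 */
static int trylock_all(void)
{
	int i, j;

	for (i = 0; i < NR_VCPUS; i++) {
		if (pthread_mutex_trylock(&vcpu_mutex[i]) != 0) {
			for (j = 0; j < i; j++)
				pthread_mutex_unlock(&vcpu_mutex[j]);
			return -1;
		}
	}
	return 0;
}

/* Counterpart of kvm_unlock_all_vcpus(): drop everything unconditionally. */
static void unlock_all(void)
{
	int i;

	for (i = 0; i < NR_VCPUS; i++)
		pthread_mutex_unlock(&vcpu_mutex[i]);
}
```

The key property, shared with the kernel helper, is that a failed trylock_all() leaves nothing held, so a caller can simply return -EBUSY (as the vgic and AIA call sites do) without any partial-unlock bookkeeping of its own.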
-- 
2.46.0

From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Subject: [PATCH v3 2/4] RISC-V: KVM: switch to kvm_lock/unlock_all_vcpus
Date: Wed, 30 Apr 2025 16:23:09 -0400
Message-ID: <20250430202311.364641-3-mlevitsk@redhat.com>
In-Reply-To: <20250430202311.364641-1-mlevitsk@redhat.com>
References: <20250430202311.364641-1-mlevitsk@redhat.com>

Use kvm_trylock_all_vcpus()/kvm_unlock_all_vcpus() instead of RISC-V's
own implementation, to avoid triggering a lockdep warning if the VM is
configured to have more than MAX_LOCK_DEPTH vCPUs.

Compile tested only.
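One call-site detail worth noting for reviewers: the polarity is inverted relative to the old helpers. The arm64 and RISC-V lock_all_vcpus() returned true on success, while kvm_trylock_all_vcpus() follows the usual kernel convention of returning 0 on success and -EINTR on failure, which is why every converted `if` condition loses its negation. A converted caller follows this shape (kernel-style pseudocode mirroring the diffs in this series, not a standalone runnable example):

```
ret = -EBUSY;
if (kvm_trylock_all_vcpus(kvm))	/* non-zero: some vcpu->mutex was contended */
	return ret;

/* ... mutate VM-wide state while every vCPU is held out of its ioctls ... */

kvm_unlock_all_vcpus(kvm);
```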
Suggested-by: Paolo Bonzini
Signed-off-by: Maxim Levitsky
---
 arch/riscv/kvm/aia_device.c | 34 ++--------------------------------
 1 file changed, 2 insertions(+), 32 deletions(-)

diff --git a/arch/riscv/kvm/aia_device.c b/arch/riscv/kvm/aia_device.c
index 39cd26af5a69..6315821f0d69 100644
--- a/arch/riscv/kvm/aia_device.c
+++ b/arch/riscv/kvm/aia_device.c
@@ -12,36 +12,6 @@
 #include
 #include
 
-static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx)
-{
-	struct kvm_vcpu *tmp_vcpu;
-
-	for (; vcpu_lock_idx >= 0; vcpu_lock_idx--) {
-		tmp_vcpu = kvm_get_vcpu(kvm, vcpu_lock_idx);
-		mutex_unlock(&tmp_vcpu->mutex);
-	}
-}
-
-static void unlock_all_vcpus(struct kvm *kvm)
-{
-	unlock_vcpus(kvm, atomic_read(&kvm->online_vcpus) - 1);
-}
-
-static bool lock_all_vcpus(struct kvm *kvm)
-{
-	struct kvm_vcpu *tmp_vcpu;
-	unsigned long c;
-
-	kvm_for_each_vcpu(c, tmp_vcpu, kvm) {
-		if (!mutex_trylock(&tmp_vcpu->mutex)) {
-			unlock_vcpus(kvm, c - 1);
-			return false;
-		}
-	}
-
-	return true;
-}
-
 static int aia_create(struct kvm_device *dev, u32 type)
 {
 	int ret;
@@ -53,7 +23,7 @@ static int aia_create(struct kvm_device *dev, u32 type)
 		return -EEXIST;
 
 	ret = -EBUSY;
-	if (!lock_all_vcpus(kvm))
+	if (kvm_trylock_all_vcpus(kvm))
 		return ret;
 
 	kvm_for_each_vcpu(i, vcpu, kvm) {
@@ -65,7 +35,7 @@ static int aia_create(struct kvm_device *dev, u32 type)
 	kvm->arch.aia.in_kernel = true;
 
 out_unlock:
-	unlock_all_vcpus(kvm);
+	kvm_unlock_all_vcpus(kvm);
 	return ret;
 }
 
-- 
2.46.0

From: Maxim Levitsky <mlevitsk@redhat.com>
To: kvm@vger.kernel.org
Subject: [PATCH v3 3/4] locking/mutex: implement mutex_lock_killable_nest_lock
Date: Wed, 30 Apr 2025 16:23:10 -0400
Message-ID: <20250430202311.364641-4-mlevitsk@redhat.com>
In-Reply-To: <20250430202311.364641-1-mlevitsk@redhat.com>
References: <20250430202311.364641-1-mlevitsk@redhat.com>
KVM's SEV intra-host migration code needs to lock all vCPUs of the
source and the target VM before it proceeds with the migration.

The number of vCPUs that belong to each VM is not bounded by anything
except a self-imposed KVM limit of CONFIG_KVM_MAX_NR_VCPUS vCPUs, which
is significantly larger than the depth of lockdep's lock stack.

Luckily, the locks in both of the cases mentioned above are held under
the 'kvm->lock' of each VM, which means that we can use the little-known
lockdep feature called a "nest_lock" to support this use case in a
cleaner way, compared to the way it's currently done.

Implement and expose 'mutex_lock_killable_nest_lock' for this purpose.

Signed-off-by: Maxim Levitsky
---
 include/linux/mutex.h  | 17 +++++++++++++----
 kernel/locking/mutex.c |  7 ++++---
 2 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index da4518cfd59c..a039fa8c1780 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -156,16 +156,15 @@ static inline int __devm_mutex_init(struct device *dev, struct mutex *lock)
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass);
 extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock);
-
 extern int __must_check mutex_lock_interruptible_nested(struct mutex *lock,
 					unsigned int subclass);
-extern int __must_check mutex_lock_killable_nested(struct mutex *lock,
-					unsigned int subclass);
+extern int __must_check _mutex_lock_killable(struct mutex *lock,
-					unsigned int subclass, struct lockdep_map *nest_lock);
 extern void mutex_lock_io_nested(struct mutex *lock, unsigned int subclass);
 
 #define mutex_lock(lock) mutex_lock_nested(lock, 0)
 #define mutex_lock_interruptible(lock) mutex_lock_interruptible_nested(lock, 0)
-#define mutex_lock_killable(lock) mutex_lock_killable_nested(lock, 0)
+#define mutex_lock_killable(lock) _mutex_lock_killable(lock, 0, NULL)
 #define mutex_lock_io(lock) mutex_lock_io_nested(lock, 0)
 
 #define mutex_lock_nest_lock(lock, nest_lock)				\
@@ -174,6 +173,15 @@ do {								\
 	_mutex_lock_nest_lock(lock, &(nest_lock)->dep_map);	\
 } while (0)
 
+#define mutex_lock_killable_nest_lock(lock, nest_lock)		\
+(								\
+	typecheck(struct lockdep_map *, &(nest_lock)->dep_map),	\
+	_mutex_lock_killable(lock, 0, &(nest_lock)->dep_map)	\
+)
+
+#define mutex_lock_killable_nested(lock, subclass)	\
+	_mutex_lock_killable(lock, subclass, NULL)
+
 #else
 extern void mutex_lock(struct mutex *lock);
 extern int __must_check mutex_lock_interruptible(struct mutex *lock);
@@ -183,6 +191,7 @@ extern void mutex_lock_io(struct mutex *lock);
 # define mutex_lock_nested(lock, subclass) mutex_lock(lock)
 # define mutex_lock_interruptible_nested(lock, subclass) mutex_lock_interruptible(lock)
 # define mutex_lock_killable_nested(lock, subclass) mutex_lock_killable(lock)
+# define mutex_lock_killable_nest_lock(lock, nest_lock) mutex_lock_killable(lock)
 # define mutex_lock_nest_lock(lock, nest_lock) mutex_lock(lock)
 # define mutex_lock_io_nested(lock, subclass) mutex_lock_io(lock)
 #endif
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index c75a838d3bae..234923121ff0 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -808,11 +808,12 @@ _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest)
 EXPORT_SYMBOL_GPL(_mutex_lock_nest_lock);
 
 int __sched
-mutex_lock_killable_nested(struct mutex *lock, unsigned int subclass)
+_mutex_lock_killable(struct mutex *lock, unsigned int subclass,
+		     struct lockdep_map *nest)
 {
-	return __mutex_lock(lock, TASK_KILLABLE, subclass, NULL, _RET_IP_);
+	return __mutex_lock(lock, TASK_KILLABLE, subclass, nest, _RET_IP_);
 }
-EXPORT_SYMBOL_GPL(mutex_lock_killable_nested);
+EXPORT_SYMBOL_GPL(_mutex_lock_killable);
 
 int __sched
 mutex_lock_interruptible_nested(struct mutex *lock, unsigned int subclass)
subclass) --=20 2.46.0 From nobody Sun Feb 8 01:52:06 2026 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1D68D2D0AAA for ; Wed, 30 Apr 2025 20:23:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=170.10.129.124 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746044639; cv=none; b=FeEv1a2bQXigWxUGqnx6emKAgMNyDftbr2WwtoRBjPaQ9AV2s2kXAqfkxN1cx9gM9LqHdcl6RekfxwOrkp9tfLdgx+ybY29Fj1lfEybpqKUevAUr1wzyIDKuUykc5v/tgJ+I35n0erQi4b8bycwYsXjrsmUP8cVIbi0zJHdfGk0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1746044639; c=relaxed/simple; bh=e0k7kAtfuw3ZmlklUuw6SXBXSm2o7IJFdIgicj6fkHc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=cv/xYCa/ltbQNf9OtlTwIRk6zJGqkEzX4HcnbmCPRpvPOnCyIhY8Jx4kgunP+bscrpVeJhqJaOVN1zyRcs863JBnAdu873dlgiZjmjReL8K4fwXdzJitm6Z2TknfDjH+AOX6/Qiv49sm98uoBWjHe9Ah2Wz0VMkGdp1UiZkz0vY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com; spf=pass smtp.mailfrom=redhat.com; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b=CsoUnknr; arc=none smtp.client-ip=170.10.129.124 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=redhat.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=redhat.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com header.b="CsoUnknr" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1746044637; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: 
content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=+MZvmRfCtI61m/XdI87EqyqLvZ2yGHt28AUm8pnJCIE=; b=CsoUnknrU6pvybwNN8pVQcpx8WOXNhSQRpE3a+AIqu/qL3k/rCjKCDij+60c9oZlSYwRTX yZBMvXSS4M/8pmOer4gIfAp0JufxQmJ7VLeNudTf9JI0Y5LGWgqmttSy4oBDta/rPqdbY/ KwuJcJ17OR0THUfj0tESHMIrWjBLClc= Received: from mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-587-D2VxUS_sMJ2DiK9mwB2K1A-1; Wed, 30 Apr 2025 16:23:53 -0400 X-MC-Unique: D2VxUS_sMJ2DiK9mwB2K1A-1 X-Mimecast-MFC-AGG-ID: D2VxUS_sMJ2DiK9mwB2K1A_1746044629 Received: from mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.93]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id A93711800570; Wed, 30 Apr 2025 20:23:48 +0000 (UTC) Received: from intellaptop.lan (unknown [10.22.80.5]) by mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 421A71800871; Wed, 30 Apr 2025 20:23:43 +0000 (UTC) From: Maxim Levitsky To: kvm@vger.kernel.org Cc: "H. 
Peter Anvin", x86@kernel.org, Maxim Levitsky, Randy Dunlap, Paolo Bonzini, Will Deacon, Oliver Upton, Kunkun Jiang, Jing Zhang, Albert Ou, Keisuke Nishimura, Anup Patel, Catalin Marinas, Atish Patra, kvmarm@lists.linux.dev, Waiman Long, Boqun Feng, linux-arm-kernel@lists.infradead.org, Peter Zijlstra, Dave Hansen, Paul Walmsley, Suzuki K Poulose, Zenghui Yu, Sebastian Ott, Andre Przywara, Ingo Molnar, Alexandre Ghiti, Bjorn Helgaas, Palmer Dabbelt, Joey Gouly, Borislav Petkov, Sean Christopherson, Marc Zyngier, Alexander Potapenko, Thomas Gleixner, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, Shusen Li, kvm-riscv@lists.infradead.org
Subject: [PATCH v3 4/4] x86: KVM: SEV: implement kvm_lock_all_vcpus and use it
Date: Wed, 30 Apr 2025 16:23:11 -0400
Message-ID: <20250430202311.364641-5-mlevitsk@redhat.com>
In-Reply-To: <20250430202311.364641-1-mlevitsk@redhat.com>
References: <20250430202311.364641-1-mlevitsk@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Implement kvm_lock_all_vcpus() and use it instead of SEV's own
sev_{lock|unlock}_vcpus_for_migration().

Suggested-by: Paolo Bonzini
Signed-off-by: Maxim Levitsky
---
 arch/x86/kvm/svm/sev.c   | 72 +++-------------------------------------
 include/linux/kvm_host.h |  1 +
 virt/kvm/kvm_main.c      | 25 ++++++++++++++
 3 files changed, 30 insertions(+), 68 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 0bc708ee2788..16db6179013d 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1882,70 +1882,6 @@ static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
 	atomic_set_release(&src_sev->migration_in_progress, 0);
 }
 
-/* vCPU mutex subclasses. */
-enum sev_migration_role {
-	SEV_MIGRATION_SOURCE = 0,
-	SEV_MIGRATION_TARGET,
-	SEV_NR_MIGRATION_ROLES,
-};
-
-static int sev_lock_vcpus_for_migration(struct kvm *kvm,
-					enum sev_migration_role role)
-{
-	struct kvm_vcpu *vcpu;
-	unsigned long i, j;
-
-	kvm_for_each_vcpu(i, vcpu, kvm) {
-		if (mutex_lock_killable_nested(&vcpu->mutex, role))
-			goto out_unlock;
-
-#ifdef CONFIG_PROVE_LOCKING
-		if (!i)
-			/*
-			 * Reset the role to one that avoids colliding with
-			 * the role used for the first vcpu mutex.
-			 */
-			role = SEV_NR_MIGRATION_ROLES;
-		else
-			mutex_release(&vcpu->mutex.dep_map, _THIS_IP_);
-#endif
-	}
-
-	return 0;
-
-out_unlock:
-
-	kvm_for_each_vcpu(j, vcpu, kvm) {
-		if (i == j)
-			break;
-
-#ifdef CONFIG_PROVE_LOCKING
-		if (j)
-			mutex_acquire(&vcpu->mutex.dep_map, role, 0, _THIS_IP_);
-#endif
-
-		mutex_unlock(&vcpu->mutex);
-	}
-	return -EINTR;
-}
-
-static void sev_unlock_vcpus_for_migration(struct kvm *kvm)
-{
-	struct kvm_vcpu *vcpu;
-	unsigned long i;
-	bool first = true;
-
-	kvm_for_each_vcpu(i, vcpu, kvm) {
-		if (first)
-			first = false;
-		else
-			mutex_acquire(&vcpu->mutex.dep_map,
-				      SEV_NR_MIGRATION_ROLES, 0, _THIS_IP_);
-
-		mutex_unlock(&vcpu->mutex);
-	}
-}
-
 static void sev_migrate_from(struct kvm *dst_kvm, struct kvm *src_kvm)
 {
 	struct kvm_sev_info *dst = to_kvm_sev_info(dst_kvm);
@@ -2083,10 +2019,10 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
 		charged = true;
 	}
 
-	ret = sev_lock_vcpus_for_migration(kvm, SEV_MIGRATION_SOURCE);
+	ret = kvm_lock_all_vcpus(kvm);
 	if (ret)
 		goto out_dst_cgroup;
-	ret = sev_lock_vcpus_for_migration(source_kvm, SEV_MIGRATION_TARGET);
+	ret = kvm_lock_all_vcpus(source_kvm);
 	if (ret)
 		goto out_dst_vcpu;
 
@@ -2100,9 +2036,9 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
 	ret = 0;
 
 out_source_vcpu:
-	sev_unlock_vcpus_for_migration(source_kvm);
+	kvm_unlock_all_vcpus(source_kvm);
 out_dst_vcpu:
-	sev_unlock_vcpus_for_migration(kvm);
+	kvm_unlock_all_vcpus(kvm);
 out_dst_cgroup:
 	/* Operates on the source on success, on the destination on failure. */
 	if (charged)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 10d6652c7aa0..a6140415c693 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1016,6 +1016,7 @@ static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id)
 void kvm_destroy_vcpus(struct kvm *kvm);
 
 int kvm_trylock_all_vcpus(struct kvm *kvm);
+int kvm_lock_all_vcpus(struct kvm *kvm);
 void kvm_unlock_all_vcpus(struct kvm *kvm);
 
 void vcpu_load(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 834f08dfa24c..9211b07b0565 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1392,6 +1392,31 @@ int kvm_trylock_all_vcpus(struct kvm *kvm)
 }
 EXPORT_SYMBOL_GPL(kvm_trylock_all_vcpus);
 
+/*
+ * Lock all of the VM's vCPUs.
+ * Assumes that the kvm->lock is held.
+ * Returns -EINTR if the process is killed.
+ */
+int kvm_lock_all_vcpus(struct kvm *kvm)
+{
+	struct kvm_vcpu *vcpu;
+	unsigned long i, j;
+
+	kvm_for_each_vcpu(i, vcpu, kvm)
+		if (mutex_lock_killable_nest_lock(&vcpu->mutex, &kvm->lock))
+			goto out_unlock;
+	return 0;
+
+out_unlock:
+	kvm_for_each_vcpu(j, vcpu, kvm) {
+		if (i == j)
+			break;
+		mutex_unlock(&vcpu->mutex);
+	}
+	return -EINTR;
+}
+EXPORT_SYMBOL_GPL(kvm_lock_all_vcpus);
+
 void kvm_unlock_all_vcpus(struct kvm *kvm)
 {
 	struct kvm_vcpu *vcpu;
-- 
2.46.0