From nobody Sat May 9 09:09:27 2026
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com
Subject: [PATCH 1/2] KVM: x86: fix usage of kvm_lock in set_nx_huge_pages()
Date: Fri, 24 Jan 2025 14:11:08 -0500
Message-ID: <20250124191109.205955-2-pbonzini@redhat.com>
In-Reply-To: <20250124191109.205955-1-pbonzini@redhat.com>
References: <20250124191109.205955-1-pbonzini@redhat.com>
Protect the whole function with kvm_lock so that all accesses to
nx_hugepage_mitigation_hard_disabled are under the lock; but drop it
when calling out to the MMU to avoid complex circular locking
situations such as the following:

__kvm_set_memory_region()
  lock(&kvm->slots_lock)
                          set_nx_huge_pages()
                            lock(kvm_lock)
                            lock(&kvm->slots_lock)
                                                 __kvmclock_cpufreq_notifier()
                                                   lock(cpu_hotplug_lock)
                                                   lock(kvm_lock)
                                                                      lock(&kvm->srcu)
                                                                      kvm_lapic_set_base()
                                                                        static_branch_inc()
                                                                          lock(cpu_hotplug_lock)
  sync(&kvm->srcu)

The deadlock is mostly theoretical, but it goes as follows:

- __kvm_set_memory_region() waits for kvm->srcu with kvm->slots_lock taken
- set_nx_huge_pages() waits for kvm->slots_lock with kvm_lock taken
- __kvmclock_cpufreq_notifier() waits for kvm_lock with cpu_hotplug_lock taken
- KVM_RUN waits for cpu_hotplug_lock with kvm->srcu taken
- therefore __kvm_set_memory_region() never completes
  synchronize_srcu(&kvm->srcu).

To break the deadlock, release kvm_lock while taking kvm->slots_lock,
which breaks the chain:

  lock(&kvm->slots_lock)
                          set_nx_huge_pages()
                            lock(kvm_lock)
                                                 __kvmclock_cpufreq_notifier()
                                                   lock(cpu_hotplug_lock)
                                                   lock(kvm_lock)
                                                                      lock(&kvm->srcu)
                                                                      kvm_lapic_set_base()
                                                                        static_branch_inc()
                                                                          lock(cpu_hotplug_lock)
                            unlock(kvm_lock)
                                                   unlock(kvm_lock)
                                                   unlock(cpu_hotplug_lock)
                                                                          unlock(cpu_hotplug_lock)
                                                                      unlock(&kvm->srcu)
                            lock(&kvm->slots_lock)
  sync(&kvm->srcu)
                            unlock(&kvm->slots_lock)
  unlock(&kvm->slots_lock)

Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/mmu/mmu.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2401606db260..1d8b45e7bb94 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7114,6 +7114,7 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
 	bool old_val = nx_huge_pages;
 	bool new_val;
 
+	guard(mutex)(&kvm_lock);
 	if (nx_hugepage_mitigation_hard_disabled)
 		return -EPERM;
 
@@ -7127,13 +7128,10 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
 	} else if (sysfs_streq(val, "never")) {
 		new_val = 0;
 
-		mutex_lock(&kvm_lock);
 		if (!list_empty(&vm_list)) {
-			mutex_unlock(&kvm_lock);
 			return -EBUSY;
 		}
 		nx_hugepage_mitigation_hard_disabled = true;
-		mutex_unlock(&kvm_lock);
 	} else if (kstrtobool(val, &new_val) < 0) {
 		return -EINVAL;
 	}
@@ -7143,16 +7141,19 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
 	if (new_val != old_val) {
 		struct kvm *kvm;
 
-		mutex_lock(&kvm_lock);
-
 		list_for_each_entry(kvm, &vm_list, vm_list) {
+			kvm_get_kvm(kvm);
+			mutex_unlock(&kvm_lock);
+
 			mutex_lock(&kvm->slots_lock);
 			kvm_mmu_zap_all_fast(kvm);
 			mutex_unlock(&kvm->slots_lock);
 
 			vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread);
+
+			mutex_lock(&kvm_lock);
+			kvm_put_kvm(kvm);
 		}
-		mutex_unlock(&kvm_lock);
 	}
 
 	return 0;
-- 
2.43.5

From nobody Sat May 9 09:09:27 2026
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com
Subject: [PATCH 2/2] Documentation: explain issues with taking locks inside kvm_lock
Date: Fri, 24 Jan 2025 14:11:09 -0500
Message-ID: <20250124191109.205955-3-pbonzini@redhat.com>
In-Reply-To: <20250124191109.205955-1-pbonzini@redhat.com>
References: <20250124191109.205955-1-pbonzini@redhat.com>

kvm_lock should be used sparingly, and it is easy to protect vm_list
walks with kvm_get_kvm and kvm_put_kvm.  Make it a hard rule to drop
kvm_lock before taking another mutex, and document it.

Signed-off-by: Paolo Bonzini
---
 Documentation/virt/kvm/locking.rst | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index c56d5f26c750..f94aad9b95ab 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -26,13 +26,6 @@ The acquisition orders for mutexes are as follows:
   are taken on the waiting side when modifying memslots, so MMU notifiers
   must not take either kvm->slots_lock or kvm->slots_arch_lock.
 
-cpus_read_lock() vs kvm_lock:
-
-- Taking cpus_read_lock() outside of kvm_lock is problematic, despite that
-  being the official ordering, as it is quite easy to unknowingly trigger
-  cpus_read_lock() while holding kvm_lock.  Use caution when walking vm_list,
-  e.g. avoid complex operations when possible.
-
 For SRCU:
 
 - ``synchronize_srcu(&kvm->srcu)`` is called inside critical sections
@@ -59,6 +52,23 @@ On x86:
 Everything else is a leaf: no other lock is taken inside the critical
 sections.
 
+In particular no other mutex should be taken inside kvm_lock, and the
+amount of code that can be run inside kvm_lock should be limited; this
+is because ``cpus_read_lock()`` might be triggered unknowingly and cause
+a circular dependency.  For example, if you take ``kvm->slots_lock``
+inside ``kvm_lock``, the following can happen on x86:
+
+- ``kvm->srcu`` is synchronized with ``kvm->slots_lock`` taken
+- you wait for ``kvm->slots_lock`` with ``kvm_lock`` taken
+- ``__kvmclock_cpufreq_notifier()`` waits for ``kvm_lock`` and
+  is called within ``cpus_read_lock()``.
+- ``KVM_RUN`` can trigger static key updates, which call ``cpus_read_lock()``,
+  with ``kvm->srcu`` taken
+- therefore ``synchronize_srcu(&kvm->srcu)`` never completes.
+
+This rule applies to all architectures.
+
+
 2. Exception
 ------------
 
@@ -238,6 +248,9 @@ time it will be set using the Dirty tracking mechanism described above.
 :Type:		mutex
 :Arch:		any
 :Protects:	- vm_list
+		- kvm_createvm_count
+		- kvm_active_vms
+:Comment:	Do not take any mutex inside.
 
 ``kvm_usage_lock``
 ^^^^^^^^^^^^^^^^^^
-- 
2.43.5
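[Editor's illustration] The discipline this series enforces in set_nx_huge_pages() — pin a VM with a reference, drop kvm_lock, do the per-VM work under the VM's own lock, then retake kvm_lock before advancing the list walk — can be sketched as a minimal single-threaded userspace model. Everything here (struct vm, vm_get, zap_all_vms, the toy mutex) is an illustrative analogue, not the kernel API; the fake mutex merely asserts the documented rule that no other mutex is acquired while the list lock (playing the role of kvm_lock) is held:

```c
#include <assert.h>

/* Toy mutex: single-threaded model that checks lock discipline. */
struct mutex { int held; };

static struct mutex list_lock;  /* plays the role of kvm_lock */

static void mutex_lock(struct mutex *m)
{
	/* The hard rule from patch 2: kvm_lock must stay a leaf mutex,
	 * so no other mutex may be taken while it is held. */
	if (m != &list_lock)
		assert(!list_lock.held);
	assert(!m->held);
	m->held = 1;
}

static void mutex_unlock(struct mutex *m)
{
	assert(m->held);
	m->held = 0;
}

struct vm {
	struct vm *next;          /* vm_list linkage, protected by list_lock */
	int refcount;             /* pins the vm while list_lock is dropped  */
	struct mutex slots_lock;  /* per-VM lock, like kvm->slots_lock       */
	int zapped;
};

static struct vm *vm_list;

/* Both called with list_lock held, mirroring kvm_get_kvm/kvm_put_kvm
 * being used under kvm_lock in the patch. */
static void vm_get(struct vm *vm) { vm->refcount++; }
static void vm_put(struct vm *vm) { vm->refcount--; }

/* The pattern from set_nx_huge_pages(): walk the list under the global
 * lock, but pin each element and drop the global lock before taking the
 * element's own mutex. */
static void zap_all_vms(void)
{
	mutex_lock(&list_lock);
	for (struct vm *vm = vm_list; vm; vm = vm->next) {
		vm_get(vm);                   /* keeps vm alive and linked */
		mutex_unlock(&list_lock);

		mutex_lock(&vm->slots_lock);  /* legal: list_lock not held */
		vm->zapped = 1;
		mutex_unlock(&vm->slots_lock);

		mutex_lock(&list_lock);       /* retake before reading vm->next */
		vm_put(vm);
	}
	mutex_unlock(&list_lock);
}
```

The reference is what makes dropping list_lock mid-walk safe: as in the kernel, an element can only be unlinked and freed under the global lock, and a held reference prevents that, so `vm->next` is still valid once the lock is retaken.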