From: Ryan Afranji
Date: Fri, 16 May 2025 19:19:28 +0000
Subject: [RFC PATCH v2 08/13] KVM: x86: Refactor common code out of sev.c
To: afranji@google.com, ackerleytng@google.com, pbonzini@redhat.com, seanjc@google.com, tglx@linutronix.de, x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, tabba@google.com
Cc: mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, andrew.jones@linux.dev, ricarkol@google.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, yu.c.zhang@linux.intel.com, vannapurve@google.com, erdemaktas@google.com, mail@maciej.szmigiero.name, vbabka@suse.cz, david@redhat.com, qperret@google.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, sagis@google.com, jthoughton@google.com

From: Ackerley Tng

Split sev_lock_two_vms() into kvm_mark_migration_in_progress() and
kvm_lock_two_vms(), split sev_unlock_two_vms() into kvm_unlock_two_vms()
and kvm_mark_migration_done(), and refactor sev.c to use these new
functions.

Co-developed-by: Sagi Shahar
Signed-off-by: Sagi Shahar
Co-developed-by: Vishal Annapurve
Signed-off-by: Vishal Annapurve
Signed-off-by: Ackerley Tng
Signed-off-by: Ryan Afranji
---
 arch/x86/kvm/svm/sev.c | 60 ++++++++++------------------------------
 arch/x86/kvm/x86.c     | 62 ++++++++++++++++++++++++++++++++++++++++++
 arch/x86/kvm/x86.h     |  6 ++++
 3 files changed, 82 insertions(+), 46 deletions(-)
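Reviewer note (not part of the change itself): below is a minimal sketch of
how the split-out helpers are intended to pair up, mirroring what
sev_vm_move_enc_context_from() does after this patch. It assumes only the
signatures added below; migrate_enc_context_example() is a hypothetical
caller name, and the comment in the middle stands in for the arch-specific
migration work.

static int migrate_enc_context_example(struct kvm *dst_kvm, struct kvm *src_kvm)
{
	int ret;

	/* Flag both VMs so no concurrent migration can involve them. */
	ret = kvm_mark_migration_in_progress(dst_kvm, src_kvm);
	if (ret)
		return ret;

	/* Take both kvm->lock mutexes (killable, nested for lockdep). */
	ret = kvm_lock_two_vms(dst_kvm, src_kvm);
	if (ret)
		goto out_mark_done;

	/* ... arch-specific migration work runs here ... */

	kvm_unlock_two_vms(dst_kvm, src_kvm);
out_mark_done:
	kvm_mark_migration_done(dst_kvm, src_kvm);
	return ret;
}

The mark step keeps using migration_in_progress as a guard against two VMs
concurrently migrating to/from each other (per the comment carried over
from sev.c); the lock step now only deals with taking the two kvm->lock
mutexes.
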
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 89c06cfcc200..b3048ec411e2 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1836,47 +1836,6 @@ static bool is_cmd_allowed_from_mirror(u32 cmd_id)
 	return false;
 }
 
-static int sev_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
-{
-	int r = -EBUSY;
-
-	if (dst_kvm == src_kvm)
-		return -EINVAL;
-
-	/*
-	 * Bail if these VMs are already involved in a migration to avoid
-	 * deadlock between two VMs trying to migrate to/from each other.
-	 */
-	if (atomic_cmpxchg_acquire(&dst_kvm->migration_in_progress, 0, 1))
-		return -EBUSY;
-
-	if (atomic_cmpxchg_acquire(&src_kvm->migration_in_progress, 0, 1))
-		goto release_dst;
-
-	r = -EINTR;
-	if (mutex_lock_killable(&dst_kvm->lock))
-		goto release_src;
-	if (mutex_lock_killable_nested(&src_kvm->lock, SINGLE_DEPTH_NESTING))
-		goto unlock_dst;
-	return 0;
-
-unlock_dst:
-	mutex_unlock(&dst_kvm->lock);
-release_src:
-	atomic_set_release(&src_kvm->migration_in_progress, 0);
-release_dst:
-	atomic_set_release(&dst_kvm->migration_in_progress, 0);
-	return r;
-}
-
-static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
-{
-	mutex_unlock(&dst_kvm->lock);
-	mutex_unlock(&src_kvm->lock);
-	atomic_set_release(&dst_kvm->migration_in_progress, 0);
-	atomic_set_release(&src_kvm->migration_in_progress, 0);
-}
-
 /* vCPU mutex subclasses. */
 enum sev_migration_role {
 	SEV_MIGRATION_SOURCE = 0,
@@ -2057,9 +2016,12 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
 		return -EBADF;
 
 	source_kvm = fd_file(f)->private_data;
-	ret = sev_lock_two_vms(kvm, source_kvm);
+	ret = kvm_mark_migration_in_progress(kvm, source_kvm);
 	if (ret)
 		return ret;
+	ret = kvm_lock_two_vms(kvm, source_kvm);
+	if (ret)
+		goto out_mark_migration_done;
 
 	if (kvm->arch.vm_type != source_kvm->arch.vm_type ||
 	    sev_guest(kvm) || !sev_guest(source_kvm)) {
@@ -2105,7 +2067,9 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
 	put_misc_cg(cg_cleanup_sev->misc_cg);
 	cg_cleanup_sev->misc_cg = NULL;
 out_unlock:
-	sev_unlock_two_vms(kvm, source_kvm);
+	kvm_unlock_two_vms(kvm, source_kvm);
+out_mark_migration_done:
+	kvm_mark_migration_done(kvm, source_kvm);
 	return ret;
 }
 
@@ -2779,9 +2743,12 @@ int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd)
 		return -EBADF;
 
 	source_kvm = fd_file(f)->private_data;
-	ret = sev_lock_two_vms(kvm, source_kvm);
+	ret = kvm_mark_migration_in_progress(kvm, source_kvm);
 	if (ret)
 		return ret;
+	ret = kvm_lock_two_vms(kvm, source_kvm);
+	if (ret)
+		goto e_mark_migration_done;
 
 	/*
 	 * Mirrors of mirrors should work, but let's not get silly.  Also
@@ -2821,9 +2788,10 @@ int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd)
 	 * KVM contexts as the original, and they may have different
 	 * memory-views.
 	 */
-
 e_unlock:
-	sev_unlock_two_vms(kvm, source_kvm);
+	kvm_unlock_two_vms(kvm, source_kvm);
+e_mark_migration_done:
+	kvm_mark_migration_done(kvm, source_kvm);
 	return ret;
 }
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f6ce044b090a..422c66a033d2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4502,6 +4502,68 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 }
 EXPORT_SYMBOL_GPL(kvm_get_msr_common);
 
+int kvm_mark_migration_in_progress(struct kvm *dst_kvm, struct kvm *src_kvm)
+{
+	int r;
+
+	if (dst_kvm == src_kvm)
+		return -EINVAL;
+
+	/*
+	 * Bail if these VMs are already involved in a migration to avoid
+	 * deadlock between two VMs trying to migrate to/from each other.
+	 */
+	r = -EBUSY;
+	if (atomic_cmpxchg_acquire(&dst_kvm->migration_in_progress, 0, 1))
+		return r;
+
+	if (atomic_cmpxchg_acquire(&src_kvm->migration_in_progress, 0, 1))
+		goto release_dst;
+
+	return 0;
+
+release_dst:
+	atomic_set_release(&dst_kvm->migration_in_progress, 0);
+	return r;
+}
+EXPORT_SYMBOL_GPL(kvm_mark_migration_in_progress);
+
+void kvm_mark_migration_done(struct kvm *dst_kvm, struct kvm *src_kvm)
+{
+	atomic_set_release(&dst_kvm->migration_in_progress, 0);
+	atomic_set_release(&src_kvm->migration_in_progress, 0);
+}
+EXPORT_SYMBOL_GPL(kvm_mark_migration_done);
+
+int kvm_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
+{
+	int r;
+
+	if (dst_kvm == src_kvm)
+		return -EINVAL;
+
+	r = -EINTR;
+	if (mutex_lock_killable(&dst_kvm->lock))
+		return r;
+
+	if (mutex_lock_killable_nested(&src_kvm->lock, SINGLE_DEPTH_NESTING))
+		goto unlock_dst;
+
+	return 0;
+
+unlock_dst:
+	mutex_unlock(&dst_kvm->lock);
+	return r;
+}
+EXPORT_SYMBOL_GPL(kvm_lock_two_vms);
+
+void kvm_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
+{
+	mutex_unlock(&dst_kvm->lock);
+	mutex_unlock(&src_kvm->lock);
+}
+EXPORT_SYMBOL_GPL(kvm_unlock_two_vms);
+
 /*
  * Read or write a bunch of msrs. All parameters are kernel addresses.
 *
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 88a9475899c8..508f9509546c 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -649,4 +649,10 @@ int ____kvm_emulate_hypercall(struct kvm_vcpu *vcpu, int cpl,
 
 int kvm_emulate_hypercall(struct kvm_vcpu *vcpu);
 
+int kvm_mark_migration_in_progress(struct kvm *dst_kvm, struct kvm *src_kvm);
+void kvm_mark_migration_done(struct kvm *dst_kvm, struct kvm *src_kvm);
+
+int kvm_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm);
+void kvm_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm);
+
 #endif
-- 
2.49.0.1101.gccaa498523-goog