From nobody Tue Apr 7 02:37:15 2026
Date: Mon, 16 Mar 2026 17:35:22 +0000
In-Reply-To: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
References: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
X-Mailer: b4 0.14.3
Message-ID: <20260316-tabba-el2_guard-v1-1-456875a2c6db@google.com>
Subject: [PATCH 01/10] KVM: arm64: Add scoped resource management (guard) for hyp_spinlock
From: Fuad Tabba
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon,
 "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64",
 "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64", open list
Cc: Fuad Tabba
Content-Type: text/plain; charset="utf-8"

The Linux kernel recently introduced scoped resource management macros
in <linux/cleanup.h>, enabling RAII-like patterns such as guard() and
scoped_guard(). These macros significantly reduce the risk of resource
leaks, deadlocks, and messy unwinding paths.

The arm64 KVM EL2 hypervisor relies heavily on its own locking
primitive, hyp_spinlock_t. Managing these locks carefully across
complex failure paths is critical, as a missed unlock at EL2 typically
results in a system-wide panic.

Add support for the guard(hyp_spinlock) and scoped_guard(hyp_spinlock)
macros by including <linux/cleanup.h> and using the DEFINE_LOCK_GUARD_1
infrastructure in the spinlock header. This paves the way for
converting error-prone manual locking into automatic scoped management.
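For illustration, the mechanism behind guard() can be modeled in plain C
with the compiler's cleanup attribute. This is a userspace sketch only:
toy_lock_t, toy_guard and friends are invented names standing in for
hyp_spinlock_t and the DEFINE_LOCK_GUARD_1-generated guard, not the
kernel API itself.

```c
/* Toy stand-in for hyp_spinlock_t (illustrative only). */
typedef struct { int locked; } toy_lock_t;

static int lock_calls, unlock_calls;

static void toy_lock(toy_lock_t *l)   { l->locked = 1; lock_calls++; }
static void toy_unlock(toy_lock_t *l) { l->locked = 0; unlock_calls++; }

/*
 * Minimal model of what a lock guard expands to: a guard object whose
 * "destructor" (the cleanup handler) releases the lock when the object
 * goes out of scope, on every exit path.
 */
typedef struct { toy_lock_t *lock; } toy_guard_t;

static void toy_guard_release(toy_guard_t *g) { toy_unlock(g->lock); }

#define toy_guard(l) \
	__attribute__((cleanup(toy_guard_release))) toy_guard_t __g = { l }; \
	toy_lock(__g.lock)

static toy_lock_t pool_lock;

static int critical_section(int fail_early)
{
	toy_guard(&pool_lock);	/* acquired here ... */
	if (fail_early)
		return -1;	/* ... released automatically on this path */
	return 0;		/* ... and on this one */
}
```

Both return paths drop the lock without an explicit unlock call, which
is the property the hyp_spinlock guard provides to EL2 code.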
Change-Id: Iba6d43c081b5fdf2496dc599fd6a781294493cb9
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/include/nvhe/spinlock.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/spinlock.h b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
index 7c7ea8c55405..2681f8d2fde5 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/spinlock.h
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include <linux/cleanup.h>
 
 typedef union hyp_spinlock {
 	u32 __val;
@@ -98,6 +99,10 @@ static inline void hyp_spin_unlock(hyp_spinlock_t *lock)
 	: "memory");
 }
 
+DEFINE_LOCK_GUARD_1(hyp_spinlock, hyp_spinlock_t,
+		    hyp_spin_lock(_T->lock),
+		    hyp_spin_unlock(_T->lock))
+
 static inline bool hyp_spin_is_locked(hyp_spinlock_t *lock)
 {
 	hyp_spinlock_t lockval = READ_ONCE(*lock);
-- 
2.53.0.851.ga537e3e6e9-goog

From nobody Tue Apr 7 02:37:15 2026
Date: Mon, 16 Mar 2026 17:35:23 +0000
In-Reply-To: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
References: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
X-Mailer: b4 0.14.3
Message-ID: <20260316-tabba-el2_guard-v1-2-456875a2c6db@google.com>
Subject: [PATCH 02/10] KVM: arm64: Use guard(hyp_spinlock) in page_alloc.c
From: Fuad Tabba
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon,
 "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64",
 "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64", open list
Cc: Fuad Tabba
Content-Type: text/plain; charset="utf-8"

Migrate the manual hyp_spin_lock() and hyp_spin_unlock() calls in
hyp_put_page(), hyp_get_page(), and hyp_alloc_pages() to the new
guard(hyp_spinlock) macro. This eliminates the need for manual unlock
calls on return paths.

In hyp_alloc_pages() specifically, this simplifies the early error
return by removing the manual unlock and the now-unneeded braces around
a single-statement if body, reducing the potential for future lock-leak
regressions.
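The conversion pattern can be exercised in a standalone model (toy
names, invented for this sketch, not the kernel API): an allocation
path with an early "nothing free" return no longer needs a manual
unlock before returning.

```c
#include <stddef.h>

typedef struct { int locked; } toy_lock_t;
static int lock_balance; /* +1 on lock, -1 on unlock; 0 == balanced */

static void toy_lock(toy_lock_t *l)   { l->locked = 1; lock_balance++; }
static void toy_unlock(toy_lock_t *l) { l->locked = 0; lock_balance--; }

typedef struct { toy_lock_t *lock; } toy_guard_t;
static void toy_guard_drop(toy_guard_t *g) { toy_unlock(g->lock); }

#define toy_guard(l) \
	__attribute__((cleanup(toy_guard_drop))) toy_guard_t __g = { l }; \
	toy_lock(__g.lock)

static toy_lock_t pool_lock;
static int free_pages = 1;

/*
 * Mirrors the shape of hyp_alloc_pages() after conversion: the early
 * failure return and the success return both release the lock via the
 * guard, with no explicit unlock anywhere.
 */
static void *toy_alloc(void)
{
	static char page[64];

	toy_guard(&pool_lock);
	if (!free_pages)
		return NULL;	/* early return: guard unlocks */
	free_pages--;
	return page;		/* normal return: guard unlocks */
}
```

The lock_balance counter stays at zero across both outcomes, which is
exactly the invariant the manual unlock calls used to maintain by hand.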
Change-Id: I37bb8236dbfff9b58bda0937a78d2057036599b4
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/nvhe/page_alloc.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index a1eb27a1a747..f43d8ad507e9 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -167,18 +167,16 @@ void hyp_put_page(struct hyp_pool *pool, void *addr)
 {
 	struct hyp_page *p = hyp_virt_to_page(addr);
 
-	hyp_spin_lock(&pool->lock);
+	guard(hyp_spinlock)(&pool->lock);
 	__hyp_put_page(pool, p);
-	hyp_spin_unlock(&pool->lock);
 }
 
 void hyp_get_page(struct hyp_pool *pool, void *addr)
 {
 	struct hyp_page *p = hyp_virt_to_page(addr);
 
-	hyp_spin_lock(&pool->lock);
+	guard(hyp_spinlock)(&pool->lock);
 	hyp_page_ref_inc(p);
-	hyp_spin_unlock(&pool->lock);
 }
 
 void hyp_split_page(struct hyp_page *p)
@@ -200,22 +198,19 @@ void *hyp_alloc_pages(struct hyp_pool *pool, u8 order)
 	struct hyp_page *p;
 	u8 i = order;
 
-	hyp_spin_lock(&pool->lock);
+	guard(hyp_spinlock)(&pool->lock);
 
 	/* Look for a high-enough-order page */
 	while (i <= pool->max_order && list_empty(&pool->free_area[i]))
 		i++;
-	if (i > pool->max_order) {
-		hyp_spin_unlock(&pool->lock);
+	if (i > pool->max_order)
 		return NULL;
-	}
 
 	/* Extract it from the tree at the right order */
 	p = node_to_page(pool->free_area[i].next);
 	p = __hyp_extract_page(pool, p, order);
 
 	hyp_set_page_refcounted(p);
-	hyp_spin_unlock(&pool->lock);
 
 	return hyp_page_to_virt(p);
 }
-- 
2.53.0.851.ga537e3e6e9-goog

From nobody Tue Apr 7 02:37:15 2026
Date: Mon, 16 Mar 2026 17:35:24 +0000
In-Reply-To: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
References: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
X-Mailer: b4 0.14.3
Message-ID: <20260316-tabba-el2_guard-v1-3-456875a2c6db@google.com>
Subject: [PATCH 03/10] KVM: arm64: Use guard(hyp_spinlock) in ffa.c
From: Fuad Tabba
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon,
 "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64",
 "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64", open list
Cc: Fuad Tabba
Content-Type: text/plain; charset="utf-8"

Migrate manual hyp_spin_lock() and hyp_spin_unlock() calls managing
host_buffers.lock and version_lock to use the guard(hyp_spinlock)
macro. This eliminates manual unlock calls on return paths and
simplifies error handling by replacing goto labels with direct
returns.

Change-Id: I52e31c0bed3d2772c800a535af8abdabd81a178b
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/nvhe/ffa.c | 86 +++++++++++++++++++-----------------------
 1 file changed, 38 insertions(+), 48 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/ffa.c b/arch/arm64/kvm/hyp/nvhe/ffa.c
index 94161ea1cd60..0c772501c3ba 100644
--- a/arch/arm64/kvm/hyp/nvhe/ffa.c
+++ b/arch/arm64/kvm/hyp/nvhe/ffa.c
@@ -239,6 +239,8 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
 	int ret = 0;
 	void *rx_virt, *tx_virt;
 
+	guard(hyp_spinlock)(&host_buffers.lock);
+
 	if (npages != (KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) / FFA_PAGE_SIZE) {
 		ret = FFA_RET_INVALID_PARAMETERS;
 		goto out;
@@ -249,10 +251,9 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
 		goto out;
 	}
 
-	hyp_spin_lock(&host_buffers.lock);
 	if (host_buffers.tx) {
 		ret = FFA_RET_DENIED;
-		goto out_unlock;
+		goto out;
 	}
 
 	/*
@@ -261,7 +262,7 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
 	 */
 	ret = ffa_map_hyp_buffers(npages);
 	if (ret)
-		goto out_unlock;
+		goto out;
 
 	ret = __pkvm_host_share_hyp(hyp_phys_to_pfn(tx));
 	if (ret) {
@@ -292,8 +293,6 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
 	host_buffers.tx = tx_virt;
 	host_buffers.rx = rx_virt;
 
-out_unlock:
-	hyp_spin_unlock(&host_buffers.lock);
 out:
 	ffa_to_smccc_res(res, ret);
 	return;
@@ -306,7 +305,7 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
 	__pkvm_host_unshare_hyp(hyp_phys_to_pfn(tx));
 err_unmap:
 	ffa_unmap_hyp_buffers();
-	goto out_unlock;
+	goto out;
 }
 
 static void do_ffa_rxtx_unmap(struct arm_smccc_1_2_regs *res,
@@ -315,15 +314,16 @@ static void do_ffa_rxtx_unmap(struct arm_smccc_1_2_regs *res,
 	DECLARE_REG(u32, id, ctxt, 1);
 	int ret = 0;
 
+	guard(hyp_spinlock)(&host_buffers.lock);
+
 	if (id != HOST_FFA_ID) {
 		ret = FFA_RET_INVALID_PARAMETERS;
 		goto out;
 	}
 
-	hyp_spin_lock(&host_buffers.lock);
 	if (!host_buffers.tx) {
 		ret = FFA_RET_INVALID_PARAMETERS;
-		goto out_unlock;
+		goto out;
 	}
 
 	hyp_unpin_shared_mem(host_buffers.tx, host_buffers.tx + 1);
@@ -336,8 +336,6 @@ static void do_ffa_rxtx_unmap(struct arm_smccc_1_2_regs *res,
 
 	ffa_unmap_hyp_buffers();
 
-out_unlock:
-	hyp_spin_unlock(&host_buffers.lock);
 out:
 	ffa_to_smccc_res(res, ret);
 }
@@ -421,15 +419,16 @@ static void do_ffa_mem_frag_tx(struct arm_smccc_1_2_regs *res,
 	int ret = FFA_RET_INVALID_PARAMETERS;
 	u32 nr_ranges;
 
+	guard(hyp_spinlock)(&host_buffers.lock);
+
 	if (fraglen > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE)
 		goto out;
 
 	if (fraglen % sizeof(*buf))
 		goto out;
 
-	hyp_spin_lock(&host_buffers.lock);
 	if (!host_buffers.tx)
-		goto out_unlock;
+		goto out;
 
 	buf = hyp_buffers.tx;
 	memcpy(buf, host_buffers.tx, fraglen);
@@ -444,15 +443,13 @@ static void do_ffa_mem_frag_tx(struct arm_smccc_1_2_regs *res,
 		 */
 		ffa_mem_reclaim(res, handle_lo, handle_hi, 0);
 		WARN_ON(res->a0 != FFA_SUCCESS);
-		goto out_unlock;
+		goto out;
 	}
 
 	ffa_mem_frag_tx(res, handle_lo, handle_hi, fraglen, endpoint_id);
 	if (res->a0 != FFA_SUCCESS && res->a0 != FFA_MEM_FRAG_RX)
 		WARN_ON(ffa_host_unshare_ranges(buf, nr_ranges));
 
-out_unlock:
-	hyp_spin_unlock(&host_buffers.lock);
 out:
 	if (ret)
 		ffa_to_smccc_res(res, ret);
@@ -482,6 +479,8 @@ static void __do_ffa_mem_xfer(const u64 func_id,
 	u32 offset, nr_ranges, checked_offset;
 	int ret = 0;
 
+	guard(hyp_spinlock)(&host_buffers.lock);
+
 	if (addr_mbz || npages_mbz || fraglen > len ||
 	    fraglen > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) {
 		ret = FFA_RET_INVALID_PARAMETERS;
@@ -494,15 +493,14 @@ static void __do_ffa_mem_xfer(const u64 func_id,
 		goto out;
 	}
 
-	hyp_spin_lock(&host_buffers.lock);
 	if (!host_buffers.tx) {
 		ret = FFA_RET_INVALID_PARAMETERS;
-		goto out_unlock;
+		goto out;
 	}
 
 	if (len > ffa_desc_buf.len) {
 		ret = FFA_RET_NO_MEMORY;
-		goto out_unlock;
+		goto out;
 	}
 
 	buf = hyp_buffers.tx;
@@ -513,30 +511,30 @@ static void __do_ffa_mem_xfer(const u64 func_id,
 	offset = ep_mem_access->composite_off;
 	if (!offset || buf->ep_count != 1 || buf->sender_id != HOST_FFA_ID) {
 		ret = FFA_RET_INVALID_PARAMETERS;
-		goto out_unlock;
+		goto out;
 	}
 
 	if (check_add_overflow(offset, sizeof(struct ffa_composite_mem_region), &checked_offset)) {
 		ret = FFA_RET_INVALID_PARAMETERS;
-		goto out_unlock;
+		goto out;
 	}
 
 	if (fraglen < checked_offset) {
 		ret = FFA_RET_INVALID_PARAMETERS;
-		goto out_unlock;
+		goto out;
 	}
 
 	reg = (void *)buf + offset;
 	nr_ranges = ((void *)buf + fraglen) - (void *)reg->constituents;
 	if (nr_ranges % sizeof(reg->constituents[0])) {
 		ret = FFA_RET_INVALID_PARAMETERS;
-		goto out_unlock;
+		goto out;
 	}
 
 	nr_ranges /= sizeof(reg->constituents[0]);
 	ret = ffa_host_share_ranges(reg->constituents, nr_ranges);
 	if (ret)
-		goto out_unlock;
+		goto out;
 
 	ffa_mem_xfer(res, func_id, len, fraglen);
 	if (fraglen != len) {
@@ -549,8 +547,6 @@ static void __do_ffa_mem_xfer(const u64 func_id,
 		goto err_unshare;
 	}
 
-out_unlock:
-	hyp_spin_unlock(&host_buffers.lock);
 out:
 	if (ret)
 		ffa_to_smccc_res(res, ret);
@@ -558,7 +554,7 @@ static void __do_ffa_mem_xfer(const u64 func_id,
 
 err_unshare:
 	WARN_ON(ffa_host_unshare_ranges(reg->constituents, nr_ranges));
-	goto out_unlock;
+	goto out;
 }
 
 #define do_ffa_mem_xfer(fid, res, ctxt)	\
@@ -583,7 +579,7 @@ static void do_ffa_mem_reclaim(struct arm_smccc_1_2_regs *res,
 
 	handle = PACK_HANDLE(handle_lo, handle_hi);
 
-	hyp_spin_lock(&host_buffers.lock);
+	guard(hyp_spinlock)(&host_buffers.lock);
 
 	buf = hyp_buffers.tx;
 	*buf = (struct ffa_mem_region) {
@@ -594,7 +590,7 @@ static void do_ffa_mem_reclaim(struct arm_smccc_1_2_regs *res,
 	ffa_retrieve_req(res, sizeof(*buf));
 	buf = hyp_buffers.rx;
 	if (res->a0 != FFA_MEM_RETRIEVE_RESP)
-		goto out_unlock;
+		goto out;
 
 	len = res->a1;
 	fraglen = res->a2;
@@ -611,13 +607,13 @@ static void do_ffa_mem_reclaim(struct arm_smccc_1_2_regs *res,
 		    fraglen > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE)) {
 		ret = FFA_RET_ABORTED;
 		ffa_rx_release(res);
-		goto out_unlock;
+		goto out;
 	}
 
 	if (len > ffa_desc_buf.len) {
 		ret = FFA_RET_NO_MEMORY;
 		ffa_rx_release(res);
-		goto out_unlock;
+		goto out;
 	}
 
 	buf = ffa_desc_buf.buf;
@@ -628,7 +624,7 @@ static void do_ffa_mem_reclaim(struct arm_smccc_1_2_regs *res,
 		ffa_mem_frag_rx(res, handle_lo, handle_hi, fragoff);
 		if (res->a0 != FFA_MEM_FRAG_TX) {
 			ret = FFA_RET_INVALID_PARAMETERS;
-			goto out_unlock;
+			goto out;
 		}
 
 		fraglen = res->a3;
@@ -638,15 +634,13 @@ static void do_ffa_mem_reclaim(struct arm_smccc_1_2_regs *res,
 
 	ffa_mem_reclaim(res, handle_lo, handle_hi, flags);
 	if (res->a0 != FFA_SUCCESS)
-		goto out_unlock;
+		goto out;
 
 	reg = (void *)buf + offset;
 	/* If the SPMD was happy, then we should be too. */
 	WARN_ON(ffa_host_unshare_ranges(reg->constituents,
 					reg->addr_range_cnt));
-out_unlock:
-	hyp_spin_unlock(&host_buffers.lock);
-
+out:
 	if (ret)
 		ffa_to_smccc_res(res, ret);
 }
@@ -774,13 +768,13 @@ static void do_ffa_version(struct arm_smccc_1_2_regs *res,
 		return;
 	}
 
-	hyp_spin_lock(&version_lock);
+	guard(hyp_spinlock)(&version_lock);
 	if (has_version_negotiated) {
 		if (FFA_MINOR_VERSION(ffa_req_version) < FFA_MINOR_VERSION(hyp_ffa_version))
 			res->a0 = FFA_RET_NOT_SUPPORTED;
 		else
 			res->a0 = hyp_ffa_version;
-		goto unlock;
+		return;
 	}
 
 	/*
@@ -793,7 +787,7 @@ static void do_ffa_version(struct arm_smccc_1_2_regs *res,
 			.a1 = ffa_req_version,
 		}, res);
 	if ((s32)res->a0 == FFA_RET_NOT_SUPPORTED)
-		goto unlock;
+		return;
 
 	hyp_ffa_version = ffa_req_version;
 	}
@@ -804,8 +798,6 @@ static void do_ffa_version(struct arm_smccc_1_2_regs *res,
 		smp_store_release(&has_version_negotiated, true);
 		res->a0 = hyp_ffa_version;
 	}
-unlock:
-	hyp_spin_unlock(&version_lock);
 }
 
 static void do_ffa_part_get(struct arm_smccc_1_2_regs *res,
@@ -818,10 +810,10 @@ static void do_ffa_part_get(struct arm_smccc_1_2_regs *res,
 	DECLARE_REG(u32, flags, ctxt, 5);
 	u32 count, partition_sz, copy_sz;
 
-	hyp_spin_lock(&host_buffers.lock);
+	guard(hyp_spinlock)(&host_buffers.lock);
 	if (!host_buffers.rx) {
 		ffa_to_smccc_res(res, FFA_RET_BUSY);
-		goto out_unlock;
+		return;
 	}
 
 	arm_smccc_1_2_smc(&(struct arm_smccc_1_2_regs) {
@@ -834,16 +826,16 @@ static void do_ffa_part_get(struct arm_smccc_1_2_regs *res,
 		}, res);
 
 	if (res->a0 != FFA_SUCCESS)
-		goto out_unlock;
+		return;
 
 	count = res->a2;
 	if (!count)
-		goto out_unlock;
+		return;
 
 	if (hyp_ffa_version > FFA_VERSION_1_0) {
 		/* Get the number of partitions deployed in the system */
 		if (flags & 0x1)
-			goto out_unlock;
+			return;
 
 		partition_sz = res->a3;
 	} else {
@@ -854,12 +846,10 @@ static void do_ffa_part_get(struct arm_smccc_1_2_regs *res,
 	copy_sz = partition_sz * count;
 	if (copy_sz > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) {
 		ffa_to_smccc_res(res, FFA_RET_ABORTED);
-		goto out_unlock;
+		return;
 	}
 
 	memcpy(host_buffers.rx, hyp_buffers.rx, copy_sz);
-out_unlock:
-	hyp_spin_unlock(&host_buffers.lock);
 }
 
 bool kvm_host_ffa_handler(struct kvm_cpu_context *host_ctxt, u32 func_id)
-- 
2.53.0.851.ga537e3e6e9-goog

From nobody Tue Apr 7 02:37:15 2026
Date: Mon, 16 Mar 2026 17:35:25 +0000
In-Reply-To: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
References: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
X-Mailer: b4 0.14.3
Message-ID: <20260316-tabba-el2_guard-v1-4-456875a2c6db@google.com>
Subject: [PATCH 04/10] KVM: arm64: Use guard(hyp_spinlock) in mm.c
From: Fuad Tabba
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon,
 "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64",
 "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64", open list
Cc: Fuad Tabba
Content-Type: text/plain; charset="utf-8"

Migrate manual hyp_spin_lock() and hyp_spin_unlock() calls managing
pkvm_pgd_lock to use the guard(hyp_spinlock) macro. This eliminates
manual unlock calls on return paths and simplifies error handling by
replacing goto labels with direct returns.

Note: hyp_fixblock_lock is held across the hyp_fixblock_map/unmap
functions, so it retains explicit lock/unlock calls; a scope-based
guard cannot express a lock taken in one function and released in
another.
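The constraint noted above can be seen in a userspace model (toy names
invented for this sketch, not the kernel implementation): a
scoped_guard-style construct binds the lock's lifetime to one lexical
scope, so the lock is dropped the moment that scope ends, making a
cross-function hold like hyp_fixblock_lock's inexpressible as a guard.

```c
typedef struct { int locked; } toy_lock_t;

static void toy_lock(toy_lock_t *l)   { l->locked = 1; }
static void toy_unlock(toy_lock_t *l) { l->locked = 0; }

typedef struct { toy_lock_t *lock; int done; } toy_scope_t;
static void toy_scope_exit(toy_scope_t *s) { toy_unlock(s->lock); }

/*
 * Rough model of scoped_guard(): a for statement whose init declares a
 * cleanup-protected guard object, so the lock is held only for the
 * body attached to the statement and dropped when that scope is left.
 */
#define toy_scoped_guard(l)						\
	for (__attribute__((cleanup(toy_scope_exit))) toy_scope_t __s =	\
		{ .lock = (l), .done = (toy_lock(l), 0) };		\
	     !__s.done; __s.done = 1)

static toy_lock_t fix_lock;
static int held_inside, held_after;

static void demo(void)
{
	toy_scoped_guard(&fix_lock)
		held_inside = fix_lock.locked;	/* lock held here */
	held_after = fix_lock.locked;		/* already dropped */
}
```

Because release is tied to scope exit, there is no way for demo() to
return with the lock still held for a later function to drop, which is
why hyp_fixblock_lock stays on manual lock/unlock.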
Change-Id: I6bb3f4105e95480269e5bf8289d084c8f9981730
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/nvhe/mm.c | 37 ++++++++++---------------------------
 1 file changed, 10 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index 218976287d3f..7a15c9fc15e5 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -35,13 +35,8 @@ static DEFINE_PER_CPU(struct hyp_fixmap_slot, fixmap_slots);
 static int __pkvm_create_mappings(unsigned long start, unsigned long size,
 				  unsigned long phys, enum kvm_pgtable_prot prot)
 {
-	int err;
-
-	hyp_spin_lock(&pkvm_pgd_lock);
-	err = kvm_pgtable_hyp_map(&pkvm_pgtable, start, size, phys, prot);
-	hyp_spin_unlock(&pkvm_pgd_lock);
-
-	return err;
+	guard(hyp_spinlock)(&pkvm_pgd_lock);
+	return kvm_pgtable_hyp_map(&pkvm_pgtable, start, size, phys, prot);
 }
 
 static int __pkvm_alloc_private_va_range(unsigned long start, size_t size)
@@ -80,10 +75,9 @@ int pkvm_alloc_private_va_range(size_t size, unsigned long *haddr)
 	unsigned long addr;
 	int ret;
 
-	hyp_spin_lock(&pkvm_pgd_lock);
+	guard(hyp_spinlock)(&pkvm_pgd_lock);
 	addr = __io_map_base;
 	ret = __pkvm_alloc_private_va_range(addr, size);
-	hyp_spin_unlock(&pkvm_pgd_lock);
 
 	*haddr = addr;
 
@@ -137,13 +131,8 @@ int pkvm_create_mappings_locked(void *from, void *to, enum kvm_pgtable_prot prot
 
 int pkvm_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot)
 {
-	int ret;
-
-	hyp_spin_lock(&pkvm_pgd_lock);
-	ret = pkvm_create_mappings_locked(from, to, prot);
-	hyp_spin_unlock(&pkvm_pgd_lock);
-
-	return ret;
+	guard(hyp_spinlock)(&pkvm_pgd_lock);
+	return pkvm_create_mappings_locked(from, to, prot);
 }
 
 int hyp_back_vmemmap(phys_addr_t back)
@@ -340,22 +329,17 @@ static int create_fixblock(void)
 	if (i >= hyp_memblock_nr)
 		return -EINVAL;
 
-	hyp_spin_lock(&pkvm_pgd_lock);
+	guard(hyp_spinlock)(&pkvm_pgd_lock);
 	addr = ALIGN(__io_map_base, PMD_SIZE);
 	ret = __pkvm_alloc_private_va_range(addr, PMD_SIZE);
 	if (ret)
-		goto unlock;
+		return ret;
 
 	ret = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, PMD_SIZE, phys, PAGE_HYP);
 	if (ret)
-		goto unlock;
+		return ret;
 
-	ret = kvm_pgtable_walk(&pkvm_pgtable, addr, PMD_SIZE, &walker);
-
-unlock:
-	hyp_spin_unlock(&pkvm_pgd_lock);
-
-	return ret;
+	return kvm_pgtable_walk(&pkvm_pgtable, addr, PMD_SIZE, &walker);
 #else
 	return 0;
 #endif
@@ -437,7 +421,7 @@ int pkvm_create_stack(phys_addr_t phys, unsigned long *haddr)
 	size_t size;
 	int ret;
 
-	hyp_spin_lock(&pkvm_pgd_lock);
+	guard(hyp_spinlock)(&pkvm_pgd_lock);
 
 	prev_base = __io_map_base;
 	/*
@@ -463,7 +447,6 @@ int pkvm_create_stack(phys_addr_t phys, unsigned long *haddr)
 		if (ret)
 			__io_map_base = prev_base;
 	}
-	hyp_spin_unlock(&pkvm_pgd_lock);
 
 	*haddr = addr + size;
 
-- 
2.53.0.851.ga537e3e6e9-goog
Date: Mon, 16 Mar 2026 17:35:26 +0000
In-Reply-To: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
References: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
Message-ID: <20260316-tabba-el2_guard-v1-5-456875a2c6db@google.com>
Subject: [PATCH 05/10] KVM: arm64: Use guard(hyp_spinlock) in pkvm.c
From: Fuad Tabba
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon, "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64", "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64", open list
Cc: Fuad Tabba

Migrate manual hyp_spin_lock() and hyp_spin_unlock() calls managing the
global vm_table_lock to use the guard(hyp_spinlock) macro. This
significantly cleans up validation and error paths during VM creation
and manipulation by eliminating the need for goto labels and manual
unlock calls on early returns.
Change-Id: I894df69b3cfe053a77dd660dfb70c95640c6d70c
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/nvhe/pkvm.c | 122 +++++++++++++++++----------------------
 1 file changed, 51 insertions(+), 71 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 2f029bfe4755..8f901fdead89 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -253,28 +253,23 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 	if (__this_cpu_read(loaded_hyp_vcpu))
 		return NULL;
 
-	hyp_spin_lock(&vm_table_lock);
+	guard(hyp_spinlock)(&vm_table_lock);
 	hyp_vm = get_vm_by_handle(handle);
 	if (!hyp_vm || hyp_vm->kvm.created_vcpus <= vcpu_idx)
-		goto unlock;
+		return NULL;
 
 	hyp_vcpu = hyp_vm->vcpus[vcpu_idx];
 	if (!hyp_vcpu)
-		goto unlock;
+		return NULL;
 
 	/* Ensure vcpu isn't loaded on more than one cpu simultaneously. */
-	if (unlikely(hyp_vcpu->loaded_hyp_vcpu)) {
-		hyp_vcpu = NULL;
-		goto unlock;
-	}
+	if (unlikely(hyp_vcpu->loaded_hyp_vcpu))
+		return NULL;
 
 	hyp_vcpu->loaded_hyp_vcpu = this_cpu_ptr(&loaded_hyp_vcpu);
 	hyp_page_ref_inc(hyp_virt_to_page(hyp_vm));
-unlock:
-	hyp_spin_unlock(&vm_table_lock);
 
-	if (hyp_vcpu)
-		__this_cpu_write(loaded_hyp_vcpu, hyp_vcpu);
+	__this_cpu_write(loaded_hyp_vcpu, hyp_vcpu);
 	return hyp_vcpu;
 }
 
@@ -282,11 +277,10 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 {
 	struct pkvm_hyp_vm *hyp_vm = pkvm_hyp_vcpu_to_hyp_vm(hyp_vcpu);
 
-	hyp_spin_lock(&vm_table_lock);
+	guard(hyp_spinlock)(&vm_table_lock);
 	hyp_vcpu->loaded_hyp_vcpu = NULL;
 	__this_cpu_write(loaded_hyp_vcpu, NULL);
 	hyp_page_ref_dec(hyp_virt_to_page(hyp_vm));
-	hyp_spin_unlock(&vm_table_lock);
 }
 
 struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void)
@@ -299,20 +293,18 @@ struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle)
 {
 	struct pkvm_hyp_vm *hyp_vm;
 
-	hyp_spin_lock(&vm_table_lock);
+	guard(hyp_spinlock)(&vm_table_lock);
 	hyp_vm = get_vm_by_handle(handle);
 	if (hyp_vm)
 		hyp_page_ref_inc(hyp_virt_to_page(hyp_vm));
-	hyp_spin_unlock(&vm_table_lock);
 
 	return hyp_vm;
 }
 
 void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm)
 {
-	hyp_spin_lock(&vm_table_lock);
+	guard(hyp_spinlock)(&vm_table_lock);
 	hyp_page_ref_dec(hyp_virt_to_page(hyp_vm));
-	hyp_spin_unlock(&vm_table_lock);
 }
 
 struct pkvm_hyp_vm *get_np_pkvm_hyp_vm(pkvm_handle_t handle)
@@ -613,9 +605,8 @@ static int insert_vm_table_entry(pkvm_handle_t handle,
 {
 	int ret;
 
-	hyp_spin_lock(&vm_table_lock);
+	guard(hyp_spinlock)(&vm_table_lock);
 	ret = __insert_vm_table_entry(handle, hyp_vm);
-	hyp_spin_unlock(&vm_table_lock);
 
 	return ret;
 }
@@ -692,9 +683,8 @@ int __pkvm_reserve_vm(void)
 {
 	int ret;
 
-	hyp_spin_lock(&vm_table_lock);
+	guard(hyp_spinlock)(&vm_table_lock);
 	ret = allocate_vm_table_entry();
-	hyp_spin_unlock(&vm_table_lock);
 
 	if (ret < 0)
 		return ret;
@@ -713,10 +703,9 @@ void __pkvm_unreserve_vm(pkvm_handle_t handle)
 	if (unlikely(!vm_table))
 		return;
 
-	hyp_spin_lock(&vm_table_lock);
+	guard(hyp_spinlock)(&vm_table_lock);
 	if (likely(idx < KVM_MAX_PVMS && vm_table[idx] == RESERVED_ENTRY))
 		remove_vm_table_entry(handle);
-	hyp_spin_unlock(&vm_table_lock);
 }
 
 /*
@@ -815,35 +804,35 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
 	if (!hyp_vcpu)
 		return -ENOMEM;
 
-	hyp_spin_lock(&vm_table_lock);
+	scoped_guard(hyp_spinlock, &vm_table_lock) {
+		hyp_vm = get_vm_by_handle(handle);
+		if (!hyp_vm) {
+			ret = -ENOENT;
+			goto err_unmap;
+		}
 
-	hyp_vm = get_vm_by_handle(handle);
-	if (!hyp_vm) {
-		ret = -ENOENT;
-		goto unlock;
+		ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu);
+		if (ret)
+			goto err_unmap;
+
+		idx = hyp_vcpu->vcpu.vcpu_idx;
+		if (idx >= hyp_vm->kvm.created_vcpus) {
+			ret = -EINVAL;
+			goto err_unmap;
+		}
+
+		if (hyp_vm->vcpus[idx]) {
+			ret = -EINVAL;
+			goto err_unmap;
+		}
+
+		hyp_vm->vcpus[idx] = hyp_vcpu;
+
+		return 0;
 	}
 
-	ret = init_pkvm_hyp_vcpu(hyp_vcpu, hyp_vm, host_vcpu);
-	if (ret)
-		goto unlock;
-
-	idx = hyp_vcpu->vcpu.vcpu_idx;
-	if (idx >= hyp_vm->kvm.created_vcpus) {
-		ret = -EINVAL;
-		goto unlock;
-	}
-
-	if (hyp_vm->vcpus[idx]) {
-		ret = -EINVAL;
-		goto unlock;
-	}
-
-	hyp_vm->vcpus[idx] = hyp_vcpu;
-unlock:
-	hyp_spin_unlock(&vm_table_lock);
-
-	if (ret)
-		unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));
+err_unmap:
+	unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));
 	return ret;
 }
 
@@ -866,27 +855,22 @@ int __pkvm_teardown_vm(pkvm_handle_t handle)
 	struct kvm *host_kvm;
 	unsigned int idx;
 	size_t vm_size;
-	int err;
 
-	hyp_spin_lock(&vm_table_lock);
-	hyp_vm = get_vm_by_handle(handle);
-	if (!hyp_vm) {
-		err = -ENOENT;
-		goto err_unlock;
+	scoped_guard(hyp_spinlock, &vm_table_lock) {
+		hyp_vm = get_vm_by_handle(handle);
+		if (!hyp_vm)
+			return -ENOENT;
+
+		if (WARN_ON(hyp_page_count(hyp_vm)))
+			return -EBUSY;
+
+		host_kvm = hyp_vm->host_kvm;
+
+		/* Ensure the VMID is clean before it can be reallocated */
+		__kvm_tlb_flush_vmid(&hyp_vm->kvm.arch.mmu);
+		remove_vm_table_entry(handle);
 	}
 
-	if (WARN_ON(hyp_page_count(hyp_vm))) {
-		err = -EBUSY;
-		goto err_unlock;
-	}
-
-	host_kvm = hyp_vm->host_kvm;
-
-	/* Ensure the VMID is clean before it can be reallocated */
-	__kvm_tlb_flush_vmid(&hyp_vm->kvm.arch.mmu);
-	remove_vm_table_entry(handle);
-	hyp_spin_unlock(&vm_table_lock);
-
 	/* Reclaim guest pages (including page-table pages) */
 	mc = &host_kvm->arch.pkvm.teardown_mc;
 	stage2_mc = &host_kvm->arch.pkvm.stage2_teardown_mc;
@@ -917,8 +901,4 @@ int __pkvm_teardown_vm(pkvm_handle_t handle)
 	teardown_donated_memory(mc, hyp_vm, vm_size);
 	hyp_unpin_shared_mem(host_kvm, host_kvm + 1);
 	return 0;
-
-err_unlock:
-	hyp_spin_unlock(&vm_table_lock);
-	return err;
 }
-- 
2.53.0.851.ga537e3e6e9-goog

From nobody Tue Apr 7 02:37:15 2026
Date: Mon, 16 Mar 2026 17:35:27 +0000
In-Reply-To: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
References: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
Message-ID: <20260316-tabba-el2_guard-v1-6-456875a2c6db@google.com>
Subject: [PATCH 06/10] KVM: arm64: Use guard(mutex) in mmu.c
From: Fuad Tabba
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon, "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64", "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64", open list
Cc: Fuad Tabba

Migrate manual mutex_lock() and mutex_unlock() calls managing
kvm_hyp_pgd_mutex and hyp_shared_pfns_lock to use the guard(mutex)
macro. This eliminates manual unlock calls on return paths and
simplifies error handling by replacing unlock goto labels with direct
returns. Centralized cleanup paths that still use goto are preserved;
only their manual unlocks are removed.

Change-Id: Ib0f33a474eb84f19da4de0858c77751bbe55dfbb
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/mmu.c | 95 ++++++++++++++++++++--------------------------
 1 file changed, 36 insertions(+), 59 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index ec2eee857208..05f1cf839c9e 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -388,13 +388,12 @@ static void stage2_flush_vm(struct kvm *kvm)
  */
 void __init free_hyp_pgds(void)
 {
-	mutex_lock(&kvm_hyp_pgd_mutex);
+	guard(mutex)(&kvm_hyp_pgd_mutex);
 	if (hyp_pgtable) {
 		kvm_pgtable_hyp_destroy(hyp_pgtable);
 		kfree(hyp_pgtable);
 		hyp_pgtable = NULL;
 	}
-	mutex_unlock(&kvm_hyp_pgd_mutex);
 }
 
 static bool kvm_host_owns_hyp_mappings(void)
@@ -421,16 +420,11 @@ static bool kvm_host_owns_hyp_mappings(void)
 int __create_hyp_mappings(unsigned long start, unsigned long size,
 			  unsigned long phys, enum kvm_pgtable_prot prot)
 {
-	int err;
-
 	if (WARN_ON(!kvm_host_owns_hyp_mappings()))
 		return -EINVAL;
 
-	mutex_lock(&kvm_hyp_pgd_mutex);
-	err = kvm_pgtable_hyp_map(hyp_pgtable, start, size, phys, prot);
-	mutex_unlock(&kvm_hyp_pgd_mutex);
-
-	return err;
+	guard(mutex)(&kvm_hyp_pgd_mutex);
+	return kvm_pgtable_hyp_map(hyp_pgtable, start, size, phys, prot);
 }
 
 static phys_addr_t kvm_kaddr_to_phys(void *kaddr)
@@ -478,56 +472,42 @@ static int share_pfn_hyp(u64 pfn)
 {
 	struct rb_node **node, *parent;
 	struct hyp_shared_pfn *this;
-	int ret = 0;
 
-	mutex_lock(&hyp_shared_pfns_lock);
+	guard(mutex)(&hyp_shared_pfns_lock);
 	this = find_shared_pfn(pfn, &node, &parent);
 	if (this) {
 		this->count++;
-		goto unlock;
+		return 0;
 	}
 
 	this = kzalloc_obj(*this);
-	if (!this) {
-		ret = -ENOMEM;
-		goto unlock;
-	}
+	if (!this)
+		return -ENOMEM;
 
 	this->pfn = pfn;
 	this->count = 1;
 	rb_link_node(&this->node, parent, node);
 	rb_insert_color(&this->node, &hyp_shared_pfns);
-	ret = kvm_call_hyp_nvhe(__pkvm_host_share_hyp, pfn);
-unlock:
-	mutex_unlock(&hyp_shared_pfns_lock);
-
-	return ret;
+	return kvm_call_hyp_nvhe(__pkvm_host_share_hyp, pfn);
 }
 
 static int unshare_pfn_hyp(u64 pfn)
 {
 	struct rb_node **node, *parent;
 	struct hyp_shared_pfn *this;
-	int ret = 0;
 
-	mutex_lock(&hyp_shared_pfns_lock);
+	guard(mutex)(&hyp_shared_pfns_lock);
 	this = find_shared_pfn(pfn, &node, &parent);
-	if (WARN_ON(!this)) {
-		ret = -ENOENT;
-		goto unlock;
-	}
+	if (WARN_ON(!this))
+		return -ENOENT;
 
 	this->count--;
 	if (this->count)
-		goto unlock;
+		return 0;
 
 	rb_erase(&this->node, &hyp_shared_pfns);
 	kfree(this);
-	ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, pfn);
-unlock:
-	mutex_unlock(&hyp_shared_pfns_lock);
-
-	return ret;
+	return kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, pfn);
 }
 
 int kvm_share_hyp(void *from, void *to)
@@ -652,22 +632,20 @@ int hyp_alloc_private_va_range(size_t size, unsigned long *haddr)
 	unsigned long base;
 	int ret = 0;
 
-	mutex_lock(&kvm_hyp_pgd_mutex);
-
-	/*
-	 * This assumes that we have enough space below the idmap
-	 * page to allocate our VAs. If not, the check in
-	 * __hyp_alloc_private_va_range() will kick. A potential
-	 * alternative would be to detect that overflow and switch
-	 * to an allocation above the idmap.
-	 *
-	 * The allocated size is always a multiple of PAGE_SIZE.
-	 */
-	size = PAGE_ALIGN(size);
-	base = io_map_base - size;
-	ret = __hyp_alloc_private_va_range(base);
-
-	mutex_unlock(&kvm_hyp_pgd_mutex);
+	scoped_guard(mutex, &kvm_hyp_pgd_mutex) {
+		/*
+		 * This assumes that we have enough space below the idmap
+		 * page to allocate our VAs. If not, the check in
+		 * __hyp_alloc_private_va_range() will kick. A potential
+		 * alternative would be to detect that overflow and switch
+		 * to an allocation above the idmap.
+		 *
+		 * The allocated size is always a multiple of PAGE_SIZE.
+		 */
+		size = PAGE_ALIGN(size);
+		base = io_map_base - size;
+		ret = __hyp_alloc_private_va_range(base);
+	}
 
 	if (!ret)
 		*haddr = base;
@@ -711,17 +689,16 @@ int create_hyp_stack(phys_addr_t phys_addr, unsigned long *haddr)
 	size_t size;
 	int ret;
 
-	mutex_lock(&kvm_hyp_pgd_mutex);
-	/*
-	 * Efficient stack verification using the NVHE_STACK_SHIFT bit implies
-	 * an alignment of our allocation on the order of the size.
-	 */
-	size = NVHE_STACK_SIZE * 2;
-	base = ALIGN_DOWN(io_map_base - size, size);
+	scoped_guard(mutex, &kvm_hyp_pgd_mutex) {
+		/*
+		 * Efficient stack verification using the NVHE_STACK_SHIFT bit implies
+		 * an alignment of our allocation on the order of the size.
+		 */
+		size = NVHE_STACK_SIZE * 2;
+		base = ALIGN_DOWN(io_map_base - size, size);
 
-	ret = __hyp_alloc_private_va_range(base);
-
-	mutex_unlock(&kvm_hyp_pgd_mutex);
+		ret = __hyp_alloc_private_va_range(base);
+	}
 
 	if (ret) {
 		kvm_err("Cannot allocate hyp stack guard page\n");
-- 
2.53.0.851.ga537e3e6e9-goog

From nobody Tue Apr 7 02:37:15 2026
Date: Mon, 16 Mar 2026 17:35:28 +0000
In-Reply-To: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
References: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
Message-ID: <20260316-tabba-el2_guard-v1-7-456875a2c6db@google.com>
Subject: [PATCH 07/10] KVM: arm64: Use scoped resource management in arm.c
From: Fuad Tabba
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon, "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64", "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64", open list
Cc: Fuad Tabba

Migrate manual spin_lock() calls managing mp_state_lock and manual
mutex_lock() calls managing kvm->arch.config_lock to use the
guard(spinlock) and guard(mutex) macros. This eliminates manual unlock
calls on early return paths and simplifies the vCPU suspend/resume
control flow.
Change-Id: Ifcd8455d08afa5d00fc200daaa3fb13f6736e6ed
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/arm.c | 53 ++++++++++++++++++++------------------------
 1 file changed, 20 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 410ffd41fd73..017f5bfabe19 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -274,17 +274,15 @@ static void kvm_destroy_mpidr_data(struct kvm *kvm)
 {
 	struct kvm_mpidr_data *data;
 
-	mutex_lock(&kvm->arch.config_lock);
-
-	data = rcu_dereference_protected(kvm->arch.mpidr_data,
-					 lockdep_is_held(&kvm->arch.config_lock));
-	if (data) {
-		rcu_assign_pointer(kvm->arch.mpidr_data, NULL);
-		synchronize_rcu();
-		kfree(data);
+	scoped_guard(mutex, &kvm->arch.config_lock) {
+		data = rcu_dereference_protected(kvm->arch.mpidr_data,
+				lockdep_is_held(&kvm->arch.config_lock));
+		if (data) {
+			rcu_assign_pointer(kvm->arch.mpidr_data, NULL);
+			synchronize_rcu();
+			kfree(data);
+		}
 	}
-
-	mutex_unlock(&kvm->arch.config_lock);
 }
 
 /**
@@ -738,9 +736,8 @@ static void __kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
 
 void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu)
 {
-	spin_lock(&vcpu->arch.mp_state_lock);
+	guard(spinlock)(&vcpu->arch.mp_state_lock);
 	__kvm_arm_vcpu_power_off(vcpu);
-	spin_unlock(&vcpu->arch.mp_state_lock);
 }
 
 bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu)
@@ -773,7 +770,7 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
 {
 	int ret = 0;
 
-	spin_lock(&vcpu->arch.mp_state_lock);
+	guard(spinlock)(&vcpu->arch.mp_state_lock);
 
 	switch (mp_state->mp_state) {
 	case KVM_MP_STATE_RUNNABLE:
@@ -789,8 +786,6 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
 		ret = -EINVAL;
 	}
 
-	spin_unlock(&vcpu->arch.mp_state_lock);
-
 	return ret;
 }
 
@@ -828,11 +823,11 @@ static void kvm_init_mpidr_data(struct kvm *kvm)
 	u64 aff_set = 0, aff_clr = ~0UL;
 	struct kvm_vcpu *vcpu;
 
-	mutex_lock(&kvm->arch.config_lock);
+	guard(mutex)(&kvm->arch.config_lock);
 
 	if (rcu_access_pointer(kvm->arch.mpidr_data) ||
 	    atomic_read(&kvm->online_vcpus) == 1)
-		goto out;
+		return;
 
 	kvm_for_each_vcpu(c, vcpu, kvm) {
 		u64 aff = kvm_vcpu_get_mpidr_aff(vcpu);
@@ -857,7 +852,7 @@ static void kvm_init_mpidr_data(struct kvm *kvm)
 			GFP_KERNEL_ACCOUNT);
 
 	if (!data)
-		goto out;
+		return;
 
 	data->mpidr_mask = mask;
 
@@ -869,8 +864,6 @@ static void kvm_init_mpidr_data(struct kvm *kvm)
 	}
 
 	rcu_assign_pointer(kvm->arch.mpidr_data, data);
-out:
-	mutex_unlock(&kvm->arch.config_lock);
 }
 
 /*
@@ -944,9 +937,8 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
 		return ret;
 	}
 
-	mutex_lock(&kvm->arch.config_lock);
+	guard(mutex)(&kvm->arch.config_lock);
 	set_bit(KVM_ARCH_FLAG_HAS_RAN_ONCE, &kvm->arch.flags);
-	mutex_unlock(&kvm->arch.config_lock);
 
 	return ret;
 }
@@ -1585,29 +1577,26 @@ static int __kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 {
 	unsigned long features = init->features[0];
 	struct kvm *kvm = vcpu->kvm;
-	int ret = -EINVAL;
+	int ret;
 
-	mutex_lock(&kvm->arch.config_lock);
+	guard(mutex)(&kvm->arch.config_lock);
 
 	if (test_bit(KVM_ARCH_FLAG_VCPU_FEATURES_CONFIGURED, &kvm->arch.flags) &&
 	    kvm_vcpu_init_changed(vcpu, init))
-		goto out_unlock;
+		return -EINVAL;
 
 	bitmap_copy(kvm->arch.vcpu_features, &features, KVM_VCPU_MAX_FEATURES);
 
 	ret = kvm_setup_vcpu(vcpu);
 	if (ret)
-		goto out_unlock;
+		return ret;
 
 	/* Now we know what it is, we can reset it. */
 	kvm_reset_vcpu(vcpu);
 
 	set_bit(KVM_ARCH_FLAG_VCPU_FEATURES_CONFIGURED, &kvm->arch.flags);
 	vcpu_set_flag(vcpu, VCPU_INITIALIZED);
-	ret = 0;
-out_unlock:
-	mutex_unlock(&kvm->arch.config_lock);
-	return ret;
+	return 0;
 }
 
 static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
@@ -1674,15 +1663,13 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	/*
 	 * Handle the "start in power-off" case.
 	 */
-	spin_lock(&vcpu->arch.mp_state_lock);
+	guard(spinlock)(&vcpu->arch.mp_state_lock);
 
 	if (power_off)
 		__kvm_arm_vcpu_power_off(vcpu);
 	else
 		WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
 
-	spin_unlock(&vcpu->arch.mp_state_lock);
-
 	return 0;
 }
 
-- 
2.53.0.851.ga537e3e6e9-goog

From nobody Tue Apr 7 02:37:15 2026
5b1f17b1804b1-485566fd0eamr230387435e9.22.1773682537346; Mon, 16 Mar 2026 10:35:37 -0700 (PDT) Date: Mon, 16 Mar 2026 17:35:29 +0000 In-Reply-To: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com> X-Mailer: b4 0.14.3 Message-ID: <20260316-tabba-el2_guard-v1-8-456875a2c6db@google.com> Subject: [PATCH 08/10] KVM: arm64: Use guard(spinlock) in psci.c From: Fuad Tabba To: Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Catalin Marinas , Will Deacon , "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64" , "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64" , open list Cc: Fuad Tabba Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Migrate manual spin_lock() and spin_unlock() calls managing the vcpu->arch.mp_state_lock to use the guard(spinlock) macro. This eliminates manual unlock calls on return paths and simplifies error handling during PSCI calls by replacing unlock goto labels with direct returns. 
Change-Id: Iaf72da18b18aaec8edff91bc30379bed9dd04b2b
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/psci.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/psci.c b/arch/arm64/kvm/psci.c
index 3b5dbe9a0a0e..04801f8b596a 100644
--- a/arch/arm64/kvm/psci.c
+++ b/arch/arm64/kvm/psci.c
@@ -62,7 +62,6 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
 	struct vcpu_reset_state *reset_state;
 	struct kvm *kvm = source_vcpu->kvm;
 	struct kvm_vcpu *vcpu = NULL;
-	int ret = PSCI_RET_SUCCESS;
 	unsigned long cpu_id;
 
 	cpu_id = smccc_get_arg1(source_vcpu);
@@ -78,14 +77,13 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
 	if (!vcpu)
 		return PSCI_RET_INVALID_PARAMS;
 
-	spin_lock(&vcpu->arch.mp_state_lock);
+	guard(spinlock)(&vcpu->arch.mp_state_lock);
+
 	if (!kvm_arm_vcpu_stopped(vcpu)) {
 		if (kvm_psci_version(source_vcpu) != KVM_ARM_PSCI_0_1)
-			ret = PSCI_RET_ALREADY_ON;
+			return PSCI_RET_ALREADY_ON;
 		else
-			ret = PSCI_RET_INVALID_PARAMS;
-
-		goto out_unlock;
+			return PSCI_RET_INVALID_PARAMS;
 	}
 
 	reset_state = &vcpu->arch.reset_state;
@@ -113,9 +111,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
 	WRITE_ONCE(vcpu->arch.mp_state.mp_state, KVM_MP_STATE_RUNNABLE);
 	kvm_vcpu_wake_up(vcpu);
 
-out_unlock:
-	spin_unlock(&vcpu->arch.mp_state_lock);
-	return ret;
+	return PSCI_RET_SUCCESS;
 }
 
 static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
@@ -176,9 +172,8 @@ static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type, u64 flags)
 	 * re-initialized.
 	 */
 	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
-		spin_lock(&tmp->arch.mp_state_lock);
-		WRITE_ONCE(tmp->arch.mp_state.mp_state, KVM_MP_STATE_STOPPED);
-		spin_unlock(&tmp->arch.mp_state_lock);
+		scoped_guard(spinlock, &tmp->arch.mp_state_lock)
+			WRITE_ONCE(tmp->arch.mp_state.mp_state, KVM_MP_STATE_STOPPED);
 	}
 	kvm_make_all_cpus_request(vcpu->kvm, KVM_REQ_SLEEP);
 
-- 
2.53.0.851.ga537e3e6e9-goog

From nobody Tue Apr 7 02:37:15 2026
Date: Mon, 16 Mar 2026 17:35:30 +0000
In-Reply-To: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
References: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
X-Mailer: b4 0.14.3
Message-ID: <20260316-tabba-el2_guard-v1-9-456875a2c6db@google.com>
Subject: [PATCH 09/10] KVM: arm64: Use guard(spinlock) in reset.c
From: Fuad Tabba
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon,
	"KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64",
	"KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64", open list
Cc: Fuad Tabba
Content-Type: text/plain; charset="utf-8"

Migrate the manual spin_lock()/spin_unlock() calls that manage
vcpu->arch.mp_state_lock over to the scoped_guard(spinlock) macro. This
streamlines control flow during vCPU reset by relying on RAII-style
automatic unlocking.
Change-Id: I32e721e67012c4a141f46b220190bf3c28485821
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/reset.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 959532422d3a..e229c6885c10 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -193,10 +193,10 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	bool loaded;
 	u32 pstate;
 
-	spin_lock(&vcpu->arch.mp_state_lock);
-	reset_state = vcpu->arch.reset_state;
-	vcpu->arch.reset_state.reset = false;
-	spin_unlock(&vcpu->arch.mp_state_lock);
+	scoped_guard(spinlock, &vcpu->arch.mp_state_lock) {
+		reset_state = vcpu->arch.reset_state;
+		vcpu->arch.reset_state.reset = false;
+	}
 
 	preempt_disable();
 	loaded = (vcpu->cpu != -1);
-- 
2.53.0.851.ga537e3e6e9-goog

From nobody Tue Apr 7 02:37:15 2026
Date: Mon, 16 Mar 2026 17:35:31 +0000
In-Reply-To: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
References: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
X-Mailer: b4 0.14.3
Message-ID: <20260316-tabba-el2_guard-v1-10-456875a2c6db@google.com>
Subject: [PATCH 10/10] KVM: arm64: Use guard(mutex) in pkvm.c
From: Fuad Tabba
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon,
	"KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64",
	"KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64", open list
Cc: Fuad Tabba
Content-Type: text/plain; charset="utf-8"

Migrate the manual mutex_lock()/mutex_unlock() calls that manage
kvm->arch.config_lock over to the guard(mutex) macro. This eliminates
the manual unlock calls on early return paths and ensures the mutex is
always released during early pKVM host-side VM initialization.
Change-Id: I902ab100f2deb4de7d6fbf0340d4aec30cf49e56
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/pkvm.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index d7a0f69a9982..4a4a9d0699c8 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -190,33 +190,28 @@ bool pkvm_hyp_vm_is_created(struct kvm *kvm)
 
 int pkvm_create_hyp_vm(struct kvm *kvm)
 {
-	int ret = 0;
+	guard(mutex)(&kvm->arch.config_lock);
 
-	mutex_lock(&kvm->arch.config_lock);
 	if (!pkvm_hyp_vm_is_created(kvm))
-		ret = __pkvm_create_hyp_vm(kvm);
-	mutex_unlock(&kvm->arch.config_lock);
+		return __pkvm_create_hyp_vm(kvm);
 
-	return ret;
+	return 0;
 }
 
 int pkvm_create_hyp_vcpu(struct kvm_vcpu *vcpu)
 {
-	int ret = 0;
+	guard(mutex)(&vcpu->kvm->arch.config_lock);
 
-	mutex_lock(&vcpu->kvm->arch.config_lock);
 	if (!vcpu_get_flag(vcpu, VCPU_PKVM_FINALIZED))
-		ret = __pkvm_create_hyp_vcpu(vcpu);
-	mutex_unlock(&vcpu->kvm->arch.config_lock);
+		return __pkvm_create_hyp_vcpu(vcpu);
 
-	return ret;
+	return 0;
 }
 
 void pkvm_destroy_hyp_vm(struct kvm *kvm)
 {
-	mutex_lock(&kvm->arch.config_lock);
+	guard(mutex)(&kvm->arch.config_lock);
 	__pkvm_destroy_hyp_vm(kvm);
-	mutex_unlock(&kvm->arch.config_lock);
 }
 
 int pkvm_init_host_vm(struct kvm *kvm)
-- 
2.53.0.851.ga537e3e6e9-goog
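[Editor's note] The guard()/scoped_guard() helpers used throughout this
series come from the kernel's cleanup infrastructure
(include/linux/cleanup.h), which is built on the compiler's cleanup
variable attribute. As a rough userspace illustration of the same idea --
this is not the kernel code; guard_lock, unlock_cleanup, set_mp_state
and get_mp_state are names invented for this sketch -- a scope-bound
unlock can be written with GCC/Clang's __attribute__((cleanup)):

```c
#include <pthread.h>

/* Cleanup handler: called with a pointer to the guarded variable. */
static void unlock_cleanup(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
}

/*
 * Take the lock now and arrange for it to be dropped automatically
 * when the enclosing scope exits, on every return path.
 */
#define guard_lock(m)							\
	pthread_mutex_lock(m);						\
	pthread_mutex_t *guard_var_					\
		__attribute__((cleanup(unlock_cleanup), unused)) = (m)

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
static int mp_state;	/* 0 = RUNNABLE, 1 = STOPPED */

int set_mp_state(int stopped)
{
	guard_lock(&state_lock);

	if (stopped) {
		mp_state = 1;
		return 1;	/* early return: cleanup still unlocks */
	}

	mp_state = 0;
	return 0;
}

int get_mp_state(void)
{
	guard_lock(&state_lock);
	return mp_state;
}
```

This is why the patches above can replace "goto out_unlock" labels with
direct returns: once the lock lifetime is tied to the scope, the
compiler emits the unlock on every exit path, which is exactly what
guard(spinlock) and guard(mutex) buy the kernel code.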