From nobody Tue Apr 7 04:21:20 2026
Date: Mon, 16 Mar 2026 17:35:24 +0000
In-Reply-To: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
X-Mailer: b4 0.14.3
Message-ID: <20260316-tabba-el2_guard-v1-3-456875a2c6db@google.com>
Subject: [PATCH 03/10] KVM: arm64: Use guard(hyp_spinlock) in ffa.c
From: Fuad Tabba
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
 Zenghui Yu, Catalin Marinas, Will Deacon,
 KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64),
 KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64), open list
Cc: Fuad Tabba
Content-Type: text/plain; charset="utf-8"

Migrate manual hyp_spin_lock() and hyp_spin_unlock() calls managing
host_buffers.lock and version_lock to use the guard(hyp_spinlock) macro.
This eliminates manual unlock calls on return paths and simplifies error
handling by replacing goto labels with direct returns.

Change-Id: I52e31c0bed3d2772c800a535af8abdabd81a178b
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/nvhe/ffa.c | 86 +++++++++++++++++++-------------
 1 file changed, 38 insertions(+), 48 deletions(-)

diff --git a/arch/arm64/kvm/hyp/nvhe/ffa.c b/arch/arm64/kvm/hyp/nvhe/ffa.c
index 94161ea1cd60..0c772501c3ba 100644
--- a/arch/arm64/kvm/hyp/nvhe/ffa.c
+++ b/arch/arm64/kvm/hyp/nvhe/ffa.c
@@ -239,6 +239,8 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
 	int ret = 0;
 	void *rx_virt, *tx_virt;
 
+	guard(hyp_spinlock)(&host_buffers.lock);
+
 	if (npages != (KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) / FFA_PAGE_SIZE) {
 		ret = FFA_RET_INVALID_PARAMETERS;
 		goto out;
@@ -249,10 +251,9 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
 		goto out;
 	}
 
-	hyp_spin_lock(&host_buffers.lock);
 	if (host_buffers.tx) {
 		ret = FFA_RET_DENIED;
-		goto out_unlock;
+		goto out;
 	}
 
 	/*
@@ -261,7 +262,7 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
 	 */
 	ret = ffa_map_hyp_buffers(npages);
 	if (ret)
-		goto out_unlock;
+		goto out;
 
 	ret = __pkvm_host_share_hyp(hyp_phys_to_pfn(tx));
 	if (ret) {
@@ -292,8 +293,6 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
 	host_buffers.tx = tx_virt;
 	host_buffers.rx = rx_virt;
 
-out_unlock:
-	hyp_spin_unlock(&host_buffers.lock);
 out:
 	ffa_to_smccc_res(res, ret);
 	return;
@@ -306,7 +305,7 @@ static void do_ffa_rxtx_map(struct arm_smccc_1_2_regs *res,
 	__pkvm_host_unshare_hyp(hyp_phys_to_pfn(tx));
 err_unmap:
 	ffa_unmap_hyp_buffers();
-	goto out_unlock;
+	goto out;
 }
 
 static void do_ffa_rxtx_unmap(struct arm_smccc_1_2_regs *res,
@@ -315,15 +314,16 @@ static void do_ffa_rxtx_unmap(struct arm_smccc_1_2_regs *res,
 	DECLARE_REG(u32, id, ctxt, 1);
 	int ret = 0;
 
+	guard(hyp_spinlock)(&host_buffers.lock);
+
 	if (id != HOST_FFA_ID) {
 		ret = FFA_RET_INVALID_PARAMETERS;
 		goto out;
 	}
 
-	hyp_spin_lock(&host_buffers.lock);
 	if (!host_buffers.tx) {
 		ret = FFA_RET_INVALID_PARAMETERS;
-		goto out_unlock;
+		goto out;
 	}
 
 	hyp_unpin_shared_mem(host_buffers.tx, host_buffers.tx + 1);
@@ -336,8 +336,6 @@ static void do_ffa_rxtx_unmap(struct arm_smccc_1_2_regs *res,
 
 	ffa_unmap_hyp_buffers();
 
-out_unlock:
-	hyp_spin_unlock(&host_buffers.lock);
 out:
 	ffa_to_smccc_res(res, ret);
 }
@@ -421,15 +419,16 @@ static void do_ffa_mem_frag_tx(struct arm_smccc_1_2_regs *res,
 	int ret = FFA_RET_INVALID_PARAMETERS;
 	u32 nr_ranges;
 
+	guard(hyp_spinlock)(&host_buffers.lock);
+
 	if (fraglen > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE)
 		goto out;
 
 	if (fraglen % sizeof(*buf))
 		goto out;
 
-	hyp_spin_lock(&host_buffers.lock);
 	if (!host_buffers.tx)
-		goto out_unlock;
+		goto out;
 
 	buf = hyp_buffers.tx;
 	memcpy(buf, host_buffers.tx, fraglen);
@@ -444,15 +443,13 @@ static void do_ffa_mem_frag_tx(struct arm_smccc_1_2_regs *res,
 		 */
 		ffa_mem_reclaim(res, handle_lo, handle_hi, 0);
 		WARN_ON(res->a0 != FFA_SUCCESS);
-		goto out_unlock;
+		goto out;
 	}
 
 	ffa_mem_frag_tx(res, handle_lo, handle_hi, fraglen, endpoint_id);
 	if (res->a0 != FFA_SUCCESS && res->a0 != FFA_MEM_FRAG_RX)
 		WARN_ON(ffa_host_unshare_ranges(buf, nr_ranges));
 
-out_unlock:
-	hyp_spin_unlock(&host_buffers.lock);
 out:
 	if (ret)
 		ffa_to_smccc_res(res, ret);
@@ -482,6 +479,8 @@ static void __do_ffa_mem_xfer(const u64 func_id,
 	u32 offset, nr_ranges, checked_offset;
 	int ret = 0;
 
+	guard(hyp_spinlock)(&host_buffers.lock);
+
 	if (addr_mbz || npages_mbz || fraglen > len ||
 	    fraglen > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) {
 		ret = FFA_RET_INVALID_PARAMETERS;
@@ -494,15 +493,14 @@ static void __do_ffa_mem_xfer(const u64 func_id,
 		goto out;
 	}
 
-	hyp_spin_lock(&host_buffers.lock);
 	if (!host_buffers.tx) {
 		ret = FFA_RET_INVALID_PARAMETERS;
-		goto out_unlock;
+		goto out;
 	}
 
 	if (len > ffa_desc_buf.len) {
 		ret = FFA_RET_NO_MEMORY;
-		goto out_unlock;
+		goto out;
 	}
 
 	buf = hyp_buffers.tx;
@@ -513,30 +511,30 @@ static void __do_ffa_mem_xfer(const u64 func_id,
 	offset = ep_mem_access->composite_off;
 	if (!offset || buf->ep_count != 1 || buf->sender_id != HOST_FFA_ID) {
 		ret = FFA_RET_INVALID_PARAMETERS;
-		goto out_unlock;
+		goto out;
 	}
 
 	if (check_add_overflow(offset, sizeof(struct ffa_composite_mem_region), &checked_offset)) {
 		ret = FFA_RET_INVALID_PARAMETERS;
-		goto out_unlock;
+		goto out;
 	}
 
 	if (fraglen < checked_offset) {
 		ret = FFA_RET_INVALID_PARAMETERS;
-		goto out_unlock;
+		goto out;
 	}
 
 	reg = (void *)buf + offset;
 	nr_ranges = ((void *)buf + fraglen) - (void *)reg->constituents;
 	if (nr_ranges % sizeof(reg->constituents[0])) {
 		ret = FFA_RET_INVALID_PARAMETERS;
-		goto out_unlock;
+		goto out;
 	}
 
 	nr_ranges /= sizeof(reg->constituents[0]);
 	ret = ffa_host_share_ranges(reg->constituents, nr_ranges);
 	if (ret)
-		goto out_unlock;
+		goto out;
 
 	ffa_mem_xfer(res, func_id, len, fraglen);
 	if (fraglen != len) {
@@ -549,8 +547,6 @@ static void __do_ffa_mem_xfer(const u64 func_id,
 		goto err_unshare;
 	}
 
-out_unlock:
-	hyp_spin_unlock(&host_buffers.lock);
 out:
 	if (ret)
 		ffa_to_smccc_res(res, ret);
@@ -558,7 +554,7 @@ static void __do_ffa_mem_xfer(const u64 func_id,
 
 err_unshare:
 	WARN_ON(ffa_host_unshare_ranges(reg->constituents, nr_ranges));
-	goto out_unlock;
+	goto out;
 }
 
 #define do_ffa_mem_xfer(fid, res, ctxt)				\
@@ -583,7 +579,7 @@ static void do_ffa_mem_reclaim(struct arm_smccc_1_2_regs *res,
 
 	handle = PACK_HANDLE(handle_lo, handle_hi);
 
-	hyp_spin_lock(&host_buffers.lock);
+	guard(hyp_spinlock)(&host_buffers.lock);
 
 	buf = hyp_buffers.tx;
 	*buf = (struct ffa_mem_region) {
@@ -594,7 +590,7 @@ static void do_ffa_mem_reclaim(struct arm_smccc_1_2_regs *res,
 	ffa_retrieve_req(res, sizeof(*buf));
 	buf = hyp_buffers.rx;
 	if (res->a0 != FFA_MEM_RETRIEVE_RESP)
-		goto out_unlock;
+		goto out;
 
 	len = res->a1;
 	fraglen = res->a2;
@@ -611,13 +607,13 @@ static void do_ffa_mem_reclaim(struct arm_smccc_1_2_regs *res,
 		     fraglen > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE)) {
 		ret = FFA_RET_ABORTED;
 		ffa_rx_release(res);
-		goto out_unlock;
+		goto out;
 	}
 
 	if (len > ffa_desc_buf.len) {
 		ret = FFA_RET_NO_MEMORY;
 		ffa_rx_release(res);
-		goto out_unlock;
+		goto out;
 	}
 
 	buf = ffa_desc_buf.buf;
@@ -628,7 +624,7 @@ static void do_ffa_mem_reclaim(struct arm_smccc_1_2_regs *res,
 		ffa_mem_frag_rx(res, handle_lo, handle_hi, fragoff);
 		if (res->a0 != FFA_MEM_FRAG_TX) {
 			ret = FFA_RET_INVALID_PARAMETERS;
-			goto out_unlock;
+			goto out;
 		}
 
 		fraglen = res->a3;
@@ -638,15 +634,13 @@ static void do_ffa_mem_reclaim(struct arm_smccc_1_2_regs *res,
 
 	ffa_mem_reclaim(res, handle_lo, handle_hi, flags);
 	if (res->a0 != FFA_SUCCESS)
-		goto out_unlock;
+		goto out;
 
 	reg = (void *)buf + offset;
 	/* If the SPMD was happy, then we should be too. */
 	WARN_ON(ffa_host_unshare_ranges(reg->constituents,
 					reg->addr_range_cnt));
-out_unlock:
-	hyp_spin_unlock(&host_buffers.lock);
-
+out:
 	if (ret)
 		ffa_to_smccc_res(res, ret);
 }
@@ -774,13 +768,13 @@ static void do_ffa_version(struct arm_smccc_1_2_regs *res,
 		return;
 	}
 
-	hyp_spin_lock(&version_lock);
+	guard(hyp_spinlock)(&version_lock);
 	if (has_version_negotiated) {
 		if (FFA_MINOR_VERSION(ffa_req_version) < FFA_MINOR_VERSION(hyp_ffa_version))
 			res->a0 = FFA_RET_NOT_SUPPORTED;
 		else
 			res->a0 = hyp_ffa_version;
-		goto unlock;
+		return;
 	}
 
 	/*
@@ -793,7 +787,7 @@ static void do_ffa_version(struct arm_smccc_1_2_regs *res,
 			.a1 = ffa_req_version,
 		}, res);
 		if ((s32)res->a0 == FFA_RET_NOT_SUPPORTED)
-			goto unlock;
+			return;
 
 		hyp_ffa_version = ffa_req_version;
 	}
@@ -804,8 +798,6 @@ static void do_ffa_version(struct arm_smccc_1_2_regs *res,
 		smp_store_release(&has_version_negotiated, true);
 		res->a0 = hyp_ffa_version;
 	}
-unlock:
-	hyp_spin_unlock(&version_lock);
 }
 
 static void do_ffa_part_get(struct arm_smccc_1_2_regs *res,
@@ -818,10 +810,10 @@ static void do_ffa_part_get(struct arm_smccc_1_2_regs *res,
 	DECLARE_REG(u32, flags, ctxt, 5);
 	u32 count, partition_sz, copy_sz;
 
-	hyp_spin_lock(&host_buffers.lock);
+	guard(hyp_spinlock)(&host_buffers.lock);
 	if (!host_buffers.rx) {
 		ffa_to_smccc_res(res, FFA_RET_BUSY);
-		goto out_unlock;
+		return;
 	}
 
 	arm_smccc_1_2_smc(&(struct arm_smccc_1_2_regs) {
@@ -834,16 +826,16 @@ static void do_ffa_part_get(struct arm_smccc_1_2_regs *res,
 	}, res);
 
 	if (res->a0 != FFA_SUCCESS)
-		goto out_unlock;
+		return;
 
 	count = res->a2;
 	if (!count)
-		goto out_unlock;
+		return;
 
 	if (hyp_ffa_version > FFA_VERSION_1_0) {
 		/* Get the number of partitions deployed in the system */
 		if (flags & 0x1)
-			goto out_unlock;
+			return;
 
 		partition_sz = res->a3;
 	} else {
@@ -854,12 +846,10 @@ static void do_ffa_part_get(struct arm_smccc_1_2_regs *res,
 	copy_sz = partition_sz * count;
 	if (copy_sz > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) {
 		ffa_to_smccc_res(res, FFA_RET_ABORTED);
-		goto out_unlock;
+		return;
 	}
 
 	memcpy(host_buffers.rx, hyp_buffers.rx, copy_sz);
-out_unlock:
-	hyp_spin_unlock(&host_buffers.lock);
 }
 
 bool kvm_host_ffa_handler(struct kvm_cpu_context *host_ctxt, u32 func_id)
-- 
2.53.0.851.ga537e3e6e9-goog