From nobody Tue Apr 7 04:21:19 2026
Date: Mon, 16 Mar 2026 17:35:27 +0000
In-Reply-To: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20260316-tabba-el2_guard-v1-0-456875a2c6db@google.com>
X-Mailer: b4 0.14.3
Message-ID: <20260316-tabba-el2_guard-v1-6-456875a2c6db@google.com>
Subject: [PATCH 06/10] KVM: arm64: Use guard(mutex) in mmu.c
From: Fuad Tabba
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon,
 "KERNEL VIRTUAL MACHINE FOR ARM64 KVM/arm64", open list
Cc: Fuad Tabba
Content-Type: text/plain; charset="utf-8"

Migrate the manual mutex_lock()/mutex_unlock() calls that manage
kvm_hyp_pgd_mutex and hyp_shared_pfns_lock to the guard(mutex) and
scoped_guard(mutex, ...) helpers. This removes the manual unlock calls
on every return path and simplifies error handling by replacing the
"goto unlock" labels with direct returns. Centralized cleanup paths
that still need a goto are preserved; only their manual unlocks are
removed.

Change-Id: Ib0f33a474eb84f19da4de0858c77751bbe55dfbb
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/mmu.c | 95 ++++++++++++++++++++---------------------------
 1 file changed, 36 insertions(+), 59 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index ec2eee857208..05f1cf839c9e 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -388,13 +388,12 @@ static void stage2_flush_vm(struct kvm *kvm)
  */
 void __init free_hyp_pgds(void)
 {
-	mutex_lock(&kvm_hyp_pgd_mutex);
+	guard(mutex)(&kvm_hyp_pgd_mutex);
 	if (hyp_pgtable) {
 		kvm_pgtable_hyp_destroy(hyp_pgtable);
 		kfree(hyp_pgtable);
 		hyp_pgtable = NULL;
 	}
-	mutex_unlock(&kvm_hyp_pgd_mutex);
 }
 
 static bool kvm_host_owns_hyp_mappings(void)
@@ -421,16 +420,11 @@ static bool kvm_host_owns_hyp_mappings(void)
 int __create_hyp_mappings(unsigned long start, unsigned long size,
 			  unsigned long phys, enum kvm_pgtable_prot prot)
 {
-	int err;
-
 	if (WARN_ON(!kvm_host_owns_hyp_mappings()))
 		return -EINVAL;
 
-	mutex_lock(&kvm_hyp_pgd_mutex);
-	err = kvm_pgtable_hyp_map(hyp_pgtable, start, size, phys, prot);
-
-	mutex_unlock(&kvm_hyp_pgd_mutex);
-
-	return err;
+	guard(mutex)(&kvm_hyp_pgd_mutex);
+	return kvm_pgtable_hyp_map(hyp_pgtable, start, size, phys, prot);
 }
 
 static phys_addr_t kvm_kaddr_to_phys(void *kaddr)
@@ -478,56 +472,42 @@ static int share_pfn_hyp(u64 pfn)
 {
 	struct rb_node **node, *parent;
 	struct hyp_shared_pfn *this;
-	int ret = 0;
 
-	mutex_lock(&hyp_shared_pfns_lock);
+	guard(mutex)(&hyp_shared_pfns_lock);
 	this = find_shared_pfn(pfn, &node, &parent);
 	if (this) {
 		this->count++;
-		goto unlock;
+		return 0;
 	}
 
 	this = kzalloc_obj(*this);
-	if (!this) {
-		ret = -ENOMEM;
-		goto unlock;
-	}
+	if (!this)
+		return -ENOMEM;
 
 	this->pfn = pfn;
 	this->count = 1;
 	rb_link_node(&this->node, parent, node);
 	rb_insert_color(&this->node, &hyp_shared_pfns);
-	ret = kvm_call_hyp_nvhe(__pkvm_host_share_hyp, pfn);
-unlock:
-	mutex_unlock(&hyp_shared_pfns_lock);
-
-	return ret;
+	return kvm_call_hyp_nvhe(__pkvm_host_share_hyp, pfn);
 }
 
 static int unshare_pfn_hyp(u64 pfn)
 {
 	struct rb_node **node, *parent;
 	struct hyp_shared_pfn *this;
-	int ret = 0;
 
-	mutex_lock(&hyp_shared_pfns_lock);
+	guard(mutex)(&hyp_shared_pfns_lock);
 	this = find_shared_pfn(pfn, &node, &parent);
-	if (WARN_ON(!this)) {
-		ret = -ENOENT;
-		goto unlock;
-	}
+	if (WARN_ON(!this))
+		return -ENOENT;
 
 	this->count--;
 	if (this->count)
-		goto unlock;
+		return 0;
 
 	rb_erase(&this->node, &hyp_shared_pfns);
 	kfree(this);
-	ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, pfn);
-unlock:
-	mutex_unlock(&hyp_shared_pfns_lock);
-
-	return ret;
+	return kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, pfn);
 }
 
 int kvm_share_hyp(void *from, void *to)
@@ -652,22 +632,20 @@ int hyp_alloc_private_va_range(size_t size, unsigned long *haddr)
 	unsigned long base;
 	int ret = 0;
 
-	mutex_lock(&kvm_hyp_pgd_mutex);
-
-	/*
-	 * This assumes that we have enough space below the idmap
-	 * page to allocate our VAs. If not, the check in
-	 * __hyp_alloc_private_va_range() will kick. A potential
-	 * alternative would be to detect that overflow and switch
-	 * to an allocation above the idmap.
-	 *
-	 * The allocated size is always a multiple of PAGE_SIZE.
-	 */
-	size = PAGE_ALIGN(size);
-	base = io_map_base - size;
-	ret = __hyp_alloc_private_va_range(base);
-
-	mutex_unlock(&kvm_hyp_pgd_mutex);
+	scoped_guard(mutex, &kvm_hyp_pgd_mutex) {
+		/*
+		 * This assumes that we have enough space below the idmap
+		 * page to allocate our VAs. If not, the check in
+		 * __hyp_alloc_private_va_range() will kick. A potential
+		 * alternative would be to detect that overflow and switch
+		 * to an allocation above the idmap.
+		 *
+		 * The allocated size is always a multiple of PAGE_SIZE.
+		 */
+		size = PAGE_ALIGN(size);
+		base = io_map_base - size;
+		ret = __hyp_alloc_private_va_range(base);
+	}
 
 	if (!ret)
 		*haddr = base;
@@ -711,17 +689,16 @@ int create_hyp_stack(phys_addr_t phys_addr, unsigned long *haddr)
 	size_t size;
 	int ret;
 
-	mutex_lock(&kvm_hyp_pgd_mutex);
-	/*
-	 * Efficient stack verification using the NVHE_STACK_SHIFT bit implies
-	 * an alignment of our allocation on the order of the size.
-	 */
-	size = NVHE_STACK_SIZE * 2;
-	base = ALIGN_DOWN(io_map_base - size, size);
+	scoped_guard(mutex, &kvm_hyp_pgd_mutex) {
+		/*
+		 * Efficient stack verification using the NVHE_STACK_SHIFT bit
+		 * implies an alignment of our allocation on the order of the
+		 * size.
+		 */
+		size = NVHE_STACK_SIZE * 2;
+		base = ALIGN_DOWN(io_map_base - size, size);
 
-	ret = __hyp_alloc_private_va_range(base);
-
-	mutex_unlock(&kvm_hyp_pgd_mutex);
+		ret = __hyp_alloc_private_va_range(base);
+	}
 
 	if (ret) {
 		kvm_err("Cannot allocate hyp stack guard page\n");
-- 
2.53.0.851.ga537e3e6e9-goog