From nobody Mon Apr 6 14:09:00 2026
Date: Thu, 19 Mar 2026 08:59:07 +0000
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
X-Mailer: git-send-email 2.53.0.983.g0bb29b3bc5-goog
Message-ID: <20260319085907.3510446-1-hmazur@google.com>
Subject: [PATCH v3] mm/execmem: Make the populate and alloc atomic
From: Hubert Mazur
To: Andrew Morton, Mike Rapoport
Cc: Greg Kroah-Hartman, Stanislaw Kardach, Michal Krawczyk, Slawomir Rosek,
    Lukasz Majczak, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Hubert Mazur
Content-Type: text/plain; charset="utf-8"

When a memory block is requested from the execmem manager, it tries to
find a suitable fragment in free_areas. If no such fragment exists, a
new memory area is added to free_areas and then allocated to the
caller. These two operations must be atomic so that no other memory
request can consume the new block in between.

Signed-off-by: Hubert Mazur
---
Changes in v3:
- Addressed the maintainer comments regarding style issues
- Removed an unnecessary conditional statement

Changes in v2:
The __execmem_cache_alloc_locked function (the variant of
__execmem_cache_alloc that expects the cache mutex to already be held)
is introduced and called after execmem_cache_add_locked from the
__execmem_cache_populate_alloc function (renamed from
execmem_cache_populate). Both calls are now guarded by a single mutex.
Link to v2: https://lore.kernel.org/all/20260317125020.1293472-2-hmazur@google.com/

Changes in v1:
Allocate the new memory fragment and assign it directly to busy_areas
inside the execmem_cache_populate function.
Link to v1: https://lore.kernel.org/all/20260312131438.361746-1-hmazur@google.com/T/#t
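A note for reviewers (kept above the diffstat, so it is not applied by
git am): below is a minimal userspace sketch of the locking pattern the
patch introduces, i.e. making "add the new area" and "allocate from it"
a single critical section so a concurrent allocation cannot consume the
freshly added block in between. The names cache_lock,
cache_add_locked(), cache_alloc_locked() and the free_bytes counter are
hypothetical stand-ins for the execmem cache mutex and maple trees, not
kernel symbols.

#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
static size_t free_bytes;	/* stand-in for the free_areas maple tree */

/* Caller must hold cache_lock. */
static void cache_add_locked(size_t size)
{
	free_bytes += size;
}

/* Caller must hold cache_lock; returns an opaque non-NULL cookie on success. */
static void *cache_alloc_locked(size_t size)
{
	if (free_bytes < size)
		return NULL;
	free_bytes -= size;
	return (void *)1;
}

/*
 * Populate and allocate as one critical section: the new block becomes
 * visible and is claimed before the mutex is dropped, so another
 * thread cannot take it in between.
 */
static void *cache_populate_alloc(size_t size)
{
	void *p;

	pthread_mutex_lock(&cache_lock);
	cache_add_locked(size);
	p = cache_alloc_locked(size);
	pthread_mutex_unlock(&cache_lock);

	return p;
}

int main(void)
{
	printf("populate+alloc %s\n",
	       cache_populate_alloc(4096) ? "succeeded" : "failed");
	return 0;
}

As in the diff below, the populate path hands back the allocation taken
under the same lock instead of dropping the mutex and retrying the
lookup.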
 mm/execmem.c | 55 +++++++++++++++++++++++++++-------------------------
 1 file changed, 29 insertions(+), 26 deletions(-)

diff --git a/mm/execmem.c b/mm/execmem.c
index 810a4ba9c924..4477bb9209ab 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -203,13 +203,6 @@ static int execmem_cache_add_locked(void *ptr, size_t size, gfp_t gfp_mask)
 	return mas_store_gfp(&mas, (void *)lower, gfp_mask);
 }
 
-static int execmem_cache_add(void *ptr, size_t size, gfp_t gfp_mask)
-{
-	guard(mutex)(&execmem_cache.mutex);
-
-	return execmem_cache_add_locked(ptr, size, gfp_mask);
-}
-
 static bool within_range(struct execmem_range *range, struct ma_state *mas,
 			 size_t size)
 {
@@ -225,18 +218,16 @@ static bool within_range(struct execmem_range *range, struct ma_state *mas,
 	return false;
 }
 
-static void *__execmem_cache_alloc(struct execmem_range *range, size_t size)
+static void *execmem_cache_alloc_locked(struct execmem_range *range, size_t size)
 {
 	struct maple_tree *free_areas = &execmem_cache.free_areas;
 	struct maple_tree *busy_areas = &execmem_cache.busy_areas;
 	MA_STATE(mas_free, free_areas, 0, ULONG_MAX);
 	MA_STATE(mas_busy, busy_areas, 0, ULONG_MAX);
-	struct mutex *mutex = &execmem_cache.mutex;
 	unsigned long addr, last, area_size = 0;
 	void *area, *ptr = NULL;
 	int err;
 
-	mutex_lock(mutex);
 	mas_for_each(&mas_free, area, ULONG_MAX) {
 		area_size = mas_range_len(&mas_free);
 
@@ -245,7 +236,7 @@ static void *__execmem_cache_alloc(struct execmem_range *range, size_t size)
 	}
 
 	if (area_size < size)
-		goto out_unlock;
+		return NULL;
 
 	addr = mas_free.index;
 	last = mas_free.last;
@@ -254,7 +245,7 @@ static void *__execmem_cache_alloc(struct execmem_range *range, size_t size)
 	mas_set_range(&mas_busy, addr, addr + size - 1);
 	err = mas_store_gfp(&mas_busy, (void *)addr, GFP_KERNEL);
 	if (err)
-		goto out_unlock;
+		return NULL;
 
 	mas_store_gfp(&mas_free, NULL, GFP_KERNEL);
 	if (area_size > size) {
@@ -268,19 +259,25 @@ static void *__execmem_cache_alloc(struct execmem_range *range, size_t size)
 		err = mas_store_gfp(&mas_free, ptr, GFP_KERNEL);
 		if (err) {
 			mas_store_gfp(&mas_busy, NULL, GFP_KERNEL);
-			goto out_unlock;
+			return NULL;
 		}
 	}
 	ptr = (void *)addr;
 
-out_unlock:
-	mutex_unlock(mutex);
 	return ptr;
 }
 
-static int execmem_cache_populate(struct execmem_range *range, size_t size)
+static void *__execmem_cache_alloc(struct execmem_range *range, size_t size)
+{
+	guard(mutex)(&execmem_cache.mutex);
+
+	return execmem_cache_alloc_locked(range, size);
+}
+
+static void *execmem_cache_populate_alloc(struct execmem_range *range, size_t size)
 {
 	unsigned long vm_flags = VM_ALLOW_HUGE_VMAP;
+	struct mutex *mutex = &execmem_cache.mutex;
 	struct vm_struct *vm;
 	size_t alloc_size;
 	int err = -ENOMEM;
@@ -294,7 +291,7 @@ static int execmem_cache_populate(struct execmem_range *range, size_t size)
 	}
 
 	if (!p)
-		return err;
+		return NULL;
 
 	vm = find_vm_area(p);
 	if (!vm)
@@ -307,33 +304,39 @@ static int execmem_cache_populate(struct execmem_range *range, size_t size)
 	if (err)
 		goto err_free_mem;
 
-	err = execmem_cache_add(p, alloc_size, GFP_KERNEL);
+	/*
+	 * New memory blocks must be propagated and allocated as an atomic
+	 * operation, otherwise it may be consumed by a parallel call
+	 * to the execmem_cache_alloc function.
+	 */
+	mutex_lock(mutex);
+	err = execmem_cache_add_locked(p, alloc_size, GFP_KERNEL);
 	if (err)
 		goto err_reset_direct_map;
 
-	return 0;
+	p = execmem_cache_alloc_locked(range, size);
+
+	mutex_unlock(mutex);
+
+	return p;
 
 err_reset_direct_map:
+	mutex_unlock(mutex);
 	execmem_set_direct_map_valid(vm, true);
 err_free_mem:
 	vfree(p);
-	return err;
+	return NULL;
 }
 
 static void *execmem_cache_alloc(struct execmem_range *range, size_t size)
 {
 	void *p;
-	int err;
 
 	p = __execmem_cache_alloc(range, size);
 	if (p)
 		return p;
 
-	err = execmem_cache_populate(range, size);
-	if (err)
-		return NULL;
-
-	return __execmem_cache_alloc(range, size);
+	return execmem_cache_populate_alloc(range, size);
 }
 
 static inline bool is_pending_free(void *ptr)
-- 
2.53.0.851.ga537e3e6e9-goog