From: Shivam Kalra via B4 Relay <devnull+shivamkalra98.zohomail.in@kernel.org>
Date: Tue, 17 Mar 2026 13:47:34 +0530
Subject: [PATCH v5 2/3] mm/vmalloc: free unused pages on vrealloc() shrink
Message-Id: <20260317-vmalloc-shrink-v5-2-bbfbf54c5265@zohomail.in>
References: <20260317-vmalloc-shrink-v5-0-bbfbf54c5265@zohomail.in>
In-Reply-To: <20260317-vmalloc-shrink-v5-0-bbfbf54c5265@zohomail.in>
To: Andrew Morton, Uladzislau Rezki
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Alice Ryhl,
 Danilo Krummrich, Shivam Kalra
Reply-To: shivamkalra98@zohomail.in

When vrealloc() shrinks an allocation and the new size crosses a page
boundary, unmap and free the tail pages that are no longer needed. This
reclaims physical memory that was previously wasted for the lifetime of
the allocation. The heuristic is simple: always free when at least one
full page becomes unused.
Huge page allocations (page_order > 0) are skipped, as partial freeing
would require splitting. Allocations with VM_FLUSH_RESET_PERMS are also
skipped, as their direct-map permissions must be reset before pages are
returned to the page allocator, which is handled by vm_reset_perms()
during vfree().

The virtual address reservation (vm->size / vmap_area) is intentionally
kept unchanged, preserving the address for potential future
grow-in-place support.

Fix the grow-in-place check to compare against vm->nr_pages rather than
get_vm_area_size(), since the latter reflects the virtual reservation,
which does not shrink. Without this fix, a grow after a shrink would
access freed pages.

Signed-off-by: Shivam Kalra <shivamkalra98@zohomail.in>
Suggested-by: Danilo Krummrich
---
 mm/vmalloc.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b29bf58c0e3f..f3820c6712c1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4345,14 +4345,24 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
 		goto need_realloc;
 	}
 
-	/*
-	 * TODO: Shrink the vm_area, i.e. unmap and free unused pages. What
-	 * would be a good heuristic for when to shrink the vm_area?
-	 */
 	if (size <= old_size) {
+		unsigned int new_nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
 		/* Zero out "freed" memory, potentially for future realloc. */
 		if (want_init_on_free() || want_init_on_alloc(flags))
 			memset((void *)p + size, 0, old_size - size);
+
+		/* Free tail pages when shrink crosses a page boundary. */
+		if (new_nr_pages < vm->nr_pages && !vm_area_page_order(vm) &&
+		    !(vm->flags & VM_FLUSH_RESET_PERMS)) {
+			unsigned long addr = (unsigned long)p;
+
+			vunmap_range(addr + (new_nr_pages << PAGE_SHIFT),
+				     addr + (vm->nr_pages << PAGE_SHIFT));
+
+			vm_area_free_pages(vm, new_nr_pages, vm->nr_pages);
+			vm->nr_pages = new_nr_pages;
+		}
 		vm->requested_size = size;
 		kasan_vrealloc(p, old_size, size);
 		return (void *)p;
@@ -4361,7 +4371,7 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
 	/*
 	 * We already have the bytes available in the allocation; use them.
 	 */
-	if (size <= alloced_size) {
+	if (size <= (size_t)vm->nr_pages << PAGE_SHIFT) {
 		/*
 		 * No need to zero memory here, as unused memory will have
 		 * already been zeroed at initial allocation time or during
-- 
2.43.0