From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
    "Liam R. Howlett", Vlastimil Babka, Mike Rapoport,
    Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
    Johannes Weiner, Zi Yan, Uladzislau Rezki,
    "Vishal Moola (Oracle)"
Cc: Ryan Roberts <ryan.roberts@arm.com>, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v1 2/2] vmalloc: Optimize vfree
Date: Mon, 5 Jan 2026 16:17:38 +0000
Message-ID: <20260105161741.3952456-3-ryan.roberts@arm.com>
In-Reply-To: <20260105161741.3952456-1-ryan.roberts@arm.com>
References: <20260105161741.3952456-1-ryan.roberts@arm.com>

Whenever vmalloc allocates high-order pages (e.g. for a huge mapping),
it must immediately split_page() them to order-0 so that it remains
compatible with users that want to access the underlying struct pages.
Commit a06157804399 ("mm/vmalloc: request large order pages from buddy
allocator") recently made it much more likely for vmalloc to allocate
high-order pages, which are then split to order-0. Unfortunately this
had the side effect of causing performance regressions in tight
vmalloc/vfree loops (e.g. the test_vmalloc.ko benchmarks). See the
Closes: tag.

The regression occurs because the high-order pages must be taken from
the buddy allocator, but since they are split to order-0, they are
freed back to the order-0 pcp list. Previously the allocations were
order-0 as well, so pages were recycled through the pcp in both
directions. If vmalloc instead freed an (e.g.) order-3 allocation back
to the order-3 pcp list, the regression would be removed.
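To make the asymmetry concrete, here is a minimal sketch of the
allocate/split/free pattern described above. It is illustrative only
and not part of this patch: the helper name and the order-3 choice are
hypothetical, while alloc_pages(), split_page() and __free_page() are
the existing page allocator APIs (from <linux/gfp.h> and <linux/mm.h>):

	static void demo_alloc_free_asymmetry(void)
	{
		/* The order-3 block comes from the buddy allocator. */
		struct page *page = alloc_pages(GFP_KERNEL, 3);
		int i;

		if (!page)
			return;

		/* Split into 2^3 independent order-0 struct pages. */
		split_page(page, 3);

		/* ... pages are mapped and used by the vmalloc user ... */

		/* Each page is freed alone, landing on the order-0 pcp. */
		for (i = 0; i < (1 << 3); i++)
			__free_page(page + i);
	}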
So let's do exactly that: use the new __free_contig_range() API to
batch-free contiguous ranges of pfns. This not only removes the
regression, it significantly improves vfree performance beyond the
baseline.

Below is a selection of test_vmalloc benchmarks run on an AWS
m7g.metal (arm64) system. v6.18 is the baseline. Commit a06157804399
("mm/vmalloc: request large order pages from buddy allocator") was
added in v6.19-rc1, which is where the regressions appear. With this
change applied on top, performance is much better. (>0 is faster,
<0 is slower, (R)/(I) = statistically significant
Regression/Improvement):

+----------------------------------------------------------+-------------+-------------+
| test_vmalloc benchmark                                   | v6.19-rc1   | v6.19-rc1   |
|                                                          |             | + change    |
+==========================================================+=============+=============+
| fix_align_alloc_test: p:1, h:0, l:500000 (usec)          | (R) -40.69% | (I)   4.85% |
| fix_size_alloc_test: p:1, h:0, l:500000 (usec)           |       0.10% |      -1.04% |
| fix_size_alloc_test: p:4, h:0, l:500000 (usec)           | (R) -22.74% | (I)  14.12% |
| fix_size_alloc_test: p:16, h:0, l:500000 (usec)          | (R) -23.63% | (I)  43.81% |
| fix_size_alloc_test: p:16, h:1, l:500000 (usec)          |      -1.58% | (I) 102.28% |
| fix_size_alloc_test: p:64, h:0, l:100000 (usec)          | (R) -24.39% | (I)  89.64% |
| fix_size_alloc_test: p:64, h:1, l:100000 (usec)          | (I)   2.34% | (I) 181.42% |
| fix_size_alloc_test: p:256, h:0, l:100000 (usec)         | (R) -23.29% | (I) 111.05% |
| fix_size_alloc_test: p:256, h:1, l:100000 (usec)         | (I)   3.74% | (I) 213.52% |
| fix_size_alloc_test: p:512, h:0, l:100000 (usec)         | (R) -23.80% | (I) 118.28% |
| fix_size_alloc_test: p:512, h:1, l:100000 (usec)         | (R)  -2.84% | (I) 427.65% |
| full_fit_alloc_test: p:1, h:0, l:500000 (usec)           |       2.74% |      -1.12% |
| kvfree_rcu_1_arg_vmalloc_test: p:1, h:0, l:500000 (usec) |       0.58% |      -0.79% |
| kvfree_rcu_2_arg_vmalloc_test: p:1, h:0, l:500000 (usec) |      -0.66% |      -0.91% |
| long_busy_list_alloc_test: p:1, h:0, l:500000 (usec)     | (R) -25.24% | (I)  70.62% |
| pcpu_alloc_test: p:1, h:0, l:500000 (usec)               |      -0.58% |      -1.27% |
| random_size_align_alloc_test: p:1, h:0, l:500000 (usec)  | (R) -45.75% | (I)  11.11% |
| random_size_alloc_test: p:1, h:0, l:500000 (usec)        | (R) -28.16% | (I)  59.47% |
| vm_map_ram_test: p:1, h:0, l:500000 (usec)               |      -0.54% |      -0.85% |
+----------------------------------------------------------+-------------+-------------+

Fixes: a06157804399 ("mm/vmalloc: request large order pages from buddy allocator")
Closes: https://lore.kernel.org/all/66919a28-bc81-49c9-b68f-dd7c73395a0d@arm.com/
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/vmalloc.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)
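For readability, the freeing logic in the hunk below amounts to the
following standalone sketch (illustrative only; the helper name is
hypothetical, the caller must pass at least one page, cond_resched()
is omitted, and __free_contig_range() is the API introduced in patch
1/2 of this series, assumed here to free nr contiguous order-0 pages
starting at start_pfn):

	static void free_page_array_batched(struct page **pages, int nr_pages)
	{
		unsigned long start_pfn = page_to_pfn(pages[0]);
		int i, nr = 1;

		for (i = 1; i < nr_pages; i++) {
			unsigned long pfn = page_to_pfn(pages[i]);

			/* Extend the current pfn-contiguous run... */
			if (start_pfn + nr == pfn) {
				nr++;
				continue;
			}

			/* ...or flush it and start a new one. */
			__free_contig_range(start_pfn, nr);
			start_pfn = pfn;
			nr = 1;
		}

		/* Flush the final run. */
		__free_contig_range(start_pfn, nr);
	}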
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 32d6ee92d4ff..86407178b6d1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3434,7 +3434,8 @@ void vfree_atomic(const void *addr)
 void vfree(const void *addr)
 {
 	struct vm_struct *vm;
-	int i;
+	unsigned long start_pfn;
+	int i, nr;
 
 	if (unlikely(in_interrupt())) {
 		vfree_atomic(addr);
@@ -3460,17 +3461,25 @@ void vfree(const void *addr)
 	/* All pages of vm should be charged to same memcg, so use first one. */
 	if (vm->nr_pages && !(vm->flags & VM_MAP_PUT_PAGES))
 		mod_memcg_page_state(vm->pages[0], MEMCG_VMALLOC, -vm->nr_pages);
-	for (i = 0; i < vm->nr_pages; i++) {
-		struct page *page = vm->pages[i];
 
-		BUG_ON(!page);
-		/*
-		 * High-order allocs for huge vmallocs are split, so
-		 * can be freed as an array of order-0 allocations
-		 */
-		__free_page(page);
-		cond_resched();
+	if (vm->nr_pages) {
+		start_pfn = page_to_pfn(vm->pages[0]);
+		nr = 1;
+		for (i = 1; i < vm->nr_pages; i++) {
+			unsigned long pfn = page_to_pfn(vm->pages[i]);
+
+			if (start_pfn + nr != pfn) {
+				__free_contig_range(start_pfn, nr);
+				start_pfn = pfn;
+				nr = 1;
+				cond_resched();
+			} else {
+				nr++;
+			}
+		}
+		__free_contig_range(start_pfn, nr);
 	}
+
 	if (!(vm->flags & VM_MAP_PUT_PAGES))
 		atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
 	kvfree(vm->pages);
-- 
2.43.0