From nobody Mon Feb 9 08:12:27 2026
From: Shivam Kalra via B4 Relay
Reply-To: shivamklr@cock.li
Date: Sat, 07 Feb 2026 17:02:47 +0530
Subject: [PATCH v3 1/4] rust: alloc: introduce Shrinkable trait
Message-Id: <20260207-binder-shrink-vec-v3-v3-1-8ff388563427@cock.li>
References: <20260207-binder-shrink-vec-v3-v3-0-8ff388563427@cock.li>
In-Reply-To: <20260207-binder-shrink-vec-v3-v3-0-8ff388563427@cock.li>
To: Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka, "Liam R. Howlett",
    Uladzislau Rezki, Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
    Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross,
    Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Christian Brauner,
    Carlos Llamas
Cc: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org,
    Shivam Kalra

Introduce the `Shrinkable` trait to identify allocators that can meaningfully reclaim memory when
an allocation is shrunk.

In the kernel, the slab allocator (`Kmalloc`) uses fixed-size buckets,
so a "shrink" operation often lands in the same bucket and yields no
actual memory savings. Page-based allocators like `Vmalloc`, however,
can reclaim physical pages when the size reduction crosses a page
boundary.

This marker trait allows generic containers (like `KVec` or `KVVec`) to
determine at compile time or at run time (via `is_shrinkable`) whether a
shrinking operation is worth performing.

Signed-off-by: Shivam Kalra
---
 rust/kernel/alloc/allocator.rs | 48 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/rust/kernel/alloc/allocator.rs b/rust/kernel/alloc/allocator.rs
index 63bfb91b36712..615799b680b55 100644
--- a/rust/kernel/alloc/allocator.rs
+++ b/rust/kernel/alloc/allocator.rs
@@ -251,6 +251,54 @@ unsafe fn realloc(
     }
 }
 
+/// Marker trait for allocators that support meaningful shrinking.
+///
+/// Shrinking is only meaningful for allocators that can actually reclaim memory. The slab
+/// allocator (`Kmalloc`) uses fixed-size buckets and cannot reclaim memory when shrinking,
+/// so it does not implement this trait.
+///
+/// For `Vmalloc`, shrinking always makes sense since it uses page-granularity allocations.
+/// For `KVmalloc`, shrinking only makes sense if the allocation is backed by vmalloc (checked
+/// at runtime via `is_vmalloc_addr`).
+///
+/// # Note
+///
+/// Currently, shrinking vmalloc allocations requires an explicit alloc+copy+free because
+/// `vrealloc` does not support in-place shrinking (see TODO at `mm/vmalloc.c:4316`).
+/// Once `vrealloc` gains this capability, the shrink implementation can be simplified.
+///
+/// # Safety
+///
+/// Implementors must ensure that [`Shrinkable::is_shrinkable`] returns `true` only when
+/// shrinking the allocation would actually reclaim memory.
+pub unsafe trait Shrinkable: Allocator {
+    /// Returns whether shrinking an allocation at the given pointer would reclaim memory.
+    ///
+    /// # Safety
+    ///
+    /// `ptr` must be a valid pointer to an allocation made by this allocator.
+    unsafe fn is_shrinkable(ptr: NonNull<u8>) -> bool;
+}
+
+// SAFETY: `Vmalloc` always uses vmalloc, which allocates at page granularity. Shrinking a
+// vmalloc allocation by at least one page will reclaim that memory.
+unsafe impl Shrinkable for Vmalloc {
+    #[inline]
+    unsafe fn is_shrinkable(_ptr: NonNull<u8>) -> bool {
+        true
+    }
+}
+
+// SAFETY: `KVmalloc` may use either kmalloc or vmalloc. We check at runtime using
+// `is_vmalloc_addr` to determine if shrinking would be meaningful.
+unsafe impl Shrinkable for KVmalloc {
+    #[inline]
+    unsafe fn is_shrinkable(ptr: NonNull<u8>) -> bool {
+        // SAFETY: `ptr` is a valid pointer by the safety requirements of this function.
+        unsafe { bindings::is_vmalloc_addr(ptr.as_ptr().cast()) }
+    }
+}
+
 #[macros::kunit_tests(rust_allocator)]
 mod tests {
     use super::*;
-- 
2.43.0

From nobody Mon Feb 9 08:12:27 2026
From: Shivam Kalra via B4 Relay
Reply-To: shivamklr@cock.li
Date: Sat, 07 Feb 2026 17:02:48 +0530
Subject: [PATCH v3 2/4] rust: kvec: implement shrink_to and shrink_to_fit for Vec
Message-Id: <20260207-binder-shrink-vec-v3-v3-2-8ff388563427@cock.li>
References: <20260207-binder-shrink-vec-v3-v3-0-8ff388563427@cock.li>
In-Reply-To: <20260207-binder-shrink-vec-v3-v3-0-8ff388563427@cock.li>
To: Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka,
    "Liam R. Howlett", Uladzislau Rezki, Miguel Ojeda, Boqun Feng,
    Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
    Trevor Gross, Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos,
    Christian Brauner, Carlos Llamas
Cc: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org,
    Shivam Kalra

Implement `shrink_to` and `shrink_to_fit` methods for `Vec` where `A`
implements the `Shrinkable` trait. `shrink_to` reduces the vector's
capacity to a specified minimum, while `shrink_to_fit` attempts to
shrink the capacity to match the current length.

Both methods only perform shrinking when it would be beneficial:

- The allocator must support meaningful shrinking (checked via the
  `Shrinkable` trait bound and the `is_shrinkable` runtime check).
- The operation must free at least one page of memory.

This prevents unnecessary reallocations (where shrinking provides no
benefit) while still allowing unused memory to be reclaimed.

The implementation uses an explicit alloc+copy+free because `vrealloc`
does not yet support in-place shrinking. A TODO note marks this for
future optimization once the kernel's `vrealloc` gains that capability.
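The two conditions above boil down to a page-count comparison. A minimal userspace sketch of that check (not kernel code; `PAGE_SIZE = 4096` is an illustrative assumption, the kernel's value comes from `kernel::page::PAGE_SIZE`):

```rust
// Sketch of the "must free at least one page" check from the commit
// message. Assumes a 4 KiB page; real code uses kernel::page::PAGE_SIZE.
const PAGE_SIZE: usize = 4096;

/// Returns true when shrinking from `current_cap` to `target_cap` elements
/// of `elem_size` bytes would occupy strictly fewer whole pages.
fn shrink_frees_a_page(current_cap: usize, target_cap: usize, elem_size: usize) -> bool {
    let current_pages = (current_cap * elem_size).div_ceil(PAGE_SIZE);
    let target_pages = (target_cap * elem_size).div_ceil(PAGE_SIZE);
    target_pages < current_pages
}

fn main() {
    let per_page = PAGE_SIZE / core::mem::size_of::<u32>(); // 1024 elements
    // Four pages of u32s down to 10 elements: pages are freed, so shrink.
    assert!(shrink_frees_a_page(per_page * 4, 10, 4));
    // Half a page down to 10 elements: one page either way, so a no-op.
    assert!(!shrink_frees_a_page(per_page / 2, 10, 4));
    println!("ok");
}
```

Rounding both sizes up with `div_ceil` is what makes sub-page shrinks a no-op: both capacities land in the same page count.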
Suggested-by: Alice Ryhl
Suggested-by: Danilo Krummrich
Signed-off-by: Shivam Kalra
---
 rust/kernel/alloc/kvec.rs | 111 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 109 insertions(+), 2 deletions(-)

diff --git a/rust/kernel/alloc/kvec.rs b/rust/kernel/alloc/kvec.rs
index ac8d6f763ae81..22a327d69c061 100644
--- a/rust/kernel/alloc/kvec.rs
+++ b/rust/kernel/alloc/kvec.rs
@@ -3,13 +3,13 @@
 //! Implementation of [`Vec`].
 
 use super::{
-    allocator::{KVmalloc, Kmalloc, Vmalloc, VmallocPageIter},
+    allocator::{KVmalloc, Kmalloc, Shrinkable, Vmalloc, VmallocPageIter},
     layout::ArrayLayout,
     AllocError, Allocator, Box, Flags, NumaNode,
 };
 use crate::{
     fmt,
-    page::AsPageIter, //
+    page::{AsPageIter, PAGE_SIZE},
 };
 use core::{
     borrow::{Borrow, BorrowMut},
@@ -735,6 +735,113 @@ pub fn retain(&mut self, mut f: impl FnMut(&mut T) -> bool) {
     }
 }
 
+impl<T, A: Shrinkable> Vec<T, A> {
+    /// Shrinks the capacity of the vector with a lower bound.
+    ///
+    /// The capacity will remain at least as large as both the length and the supplied value.
+    /// If the current capacity is less than the lower limit, this is a no-op.
+    ///
+    /// Shrinking only occurs if:
+    /// - The allocator supports shrinking for this allocation (see [`Shrinkable`]).
+    /// - The operation would free at least one page of memory.
+    ///
+    /// If these conditions are not met, the vector is left unchanged.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::alloc::allocator::Vmalloc;
+    ///
+    /// // Allocate enough capacity to span multiple pages.
+    /// let elements_per_page = kernel::page::PAGE_SIZE / core::mem::size_of::<u32>();
+    /// let mut v: Vec<u32, Vmalloc> = Vec::with_capacity(elements_per_page * 4, GFP_KERNEL)?;
+    /// v.push(1, GFP_KERNEL)?;
+    /// v.push(2, GFP_KERNEL)?;
+    ///
+    /// v.shrink_to(0, GFP_KERNEL)?;
+    /// # Ok::<(), Error>(())
+    /// ```
+    pub fn shrink_to(&mut self, min_capacity: usize, flags: Flags) -> Result<(), AllocError> {
+        let target_cap = core::cmp::max(self.len(), min_capacity);
+
+        if self.capacity() <= target_cap {
+            return Ok(());
+        }
+
+        if Self::is_zst() {
+            return Ok(());
+        }
+
+        // SAFETY: `self.ptr` is valid by the type invariant.
+        if !unsafe { A::is_shrinkable(self.ptr.cast()) } {
+            return Ok(());
+        }
+
+        // Only shrink if we would free at least one page.
+        let current_size = self.capacity() * core::mem::size_of::<T>();
+        let target_size = target_cap * core::mem::size_of::<T>();
+        let current_pages = current_size.div_ceil(PAGE_SIZE);
+        let target_pages = target_size.div_ceil(PAGE_SIZE);
+
+        if current_pages <= target_pages {
+            return Ok(());
+        }
+
+        if target_cap == 0 {
+            if !self.layout.is_empty() {
+                // SAFETY: `self.ptr` was allocated with `A`, and the layout matches.
+                unsafe { A::free(self.ptr.cast(), self.layout.into()) };
+            }
+            self.ptr = NonNull::dangling();
+            self.layout = ArrayLayout::empty();
+            return Ok(());
+        }
+
+        // SAFETY: `target_cap <= self.capacity()` and the original capacity was valid.
+        let new_layout = unsafe { ArrayLayout::<T>::new_unchecked(target_cap) };
+
+        // TODO: Once vrealloc supports in-place shrinking (mm/vmalloc.c:4316), this
+        // explicit alloc+copy+free can potentially be replaced with realloc.
+        let new_ptr = A::alloc(new_layout.into(), flags, NumaNode::NO_NODE)?;
+
+        // SAFETY: Both pointers are valid, non-overlapping, and properly aligned.
+        unsafe {
+            ptr::copy_nonoverlapping(self.as_ptr(), new_ptr.as_ptr().cast::<T>(), self.len);
+        }
+
+        // SAFETY: `self.ptr` was allocated with `A`, and the layout matches.
+        unsafe { A::free(self.ptr.cast(), self.layout.into()) };
+
+        // SAFETY: `new_ptr` is non-null because `A::alloc` succeeded.
+        self.ptr = unsafe { NonNull::new_unchecked(new_ptr.as_ptr().cast::<T>()) };
+        self.layout = new_layout;
+
+        Ok(())
+    }
+
+    /// Shrinks the capacity of the vector as much as possible.
+    ///
+    /// This is equivalent to calling `shrink_to(0, flags)`. See [`Vec::shrink_to`] for details.
+    ///
+    /// # Examples
+    ///
+    /// ```
+    /// use kernel::alloc::allocator::Vmalloc;
+    ///
+    /// let elements_per_page = kernel::page::PAGE_SIZE / core::mem::size_of::<u32>();
+    /// let mut v: Vec<u32, Vmalloc> = Vec::with_capacity(elements_per_page * 4, GFP_KERNEL)?;
+    /// v.push(1, GFP_KERNEL)?;
+    /// v.push(2, GFP_KERNEL)?;
+    /// v.push(3, GFP_KERNEL)?;
+    ///
+    /// v.shrink_to_fit(GFP_KERNEL)?;
+    /// # Ok::<(), Error>(())
+    /// ```
+    pub fn shrink_to_fit(&mut self, flags: Flags) -> Result<(), AllocError> {
+        self.shrink_to(0, flags)
+    }
+}
+
 impl<T: Clone, A: Allocator> Vec<T, A> {
     /// Extend the vector by `n` clones of `value`.
     pub fn extend_with(&mut self, n: usize, value: T, flags: Flags) -> Result<(), AllocError> {
-- 
2.43.0

From nobody Mon Feb 9 08:12:27 2026
From: Shivam Kalra via B4 Relay
Reply-To: shivamklr@cock.li
Date: Sat, 07 Feb 2026 17:02:49 +0530
Subject: [PATCH v3 3/4] rust: alloc: add KUnit tests for Vec shrink operations
Message-Id: <20260207-binder-shrink-vec-v3-v3-3-8ff388563427@cock.li>
References: <20260207-binder-shrink-vec-v3-v3-0-8ff388563427@cock.li>
In-Reply-To: <20260207-binder-shrink-vec-v3-v3-0-8ff388563427@cock.li>
To: Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka, "Liam R.
Howlett", Uladzislau Rezki, Miguel Ojeda, Boqun Feng, Gary Guo,
    Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
    Trevor Gross, Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos,
    Christian Brauner, Carlos Llamas
Cc: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org,
    Shivam Kalra

Add comprehensive KUnit tests for the `shrink_to` and `shrink_to_fit`
methods across different allocator backends (Vmalloc and KVmalloc).

The tests verify:

- Basic shrinking from multiple pages to less than one page
- Data-integrity preservation after shrinking
- No-op behavior when shrinking would not free pages
- Empty-vector shrinking
- Partial shrinking with min_capacity constraints
- Consecutive shrink operations
- KVVec shrinking behavior for both small (kmalloc-backed) and large
  (vmalloc-backed) allocations

These tests ensure that the shrinking logic correctly identifies when
memory can be reclaimed and that the `Shrinkable` trait implementation
works as expected.
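The properties listed above mirror the contract of std's `Vec::shrink_to`, which the kernel API deliberately follows (modulo fallibility and GFP flags). A userspace sketch of the core invariants, using std's infallible `Vec` as a stand-in for `VVec`/`KVVec`:

```rust
// Sketch of the invariants the KUnit tests pin down, on std's Vec:
// capacity never drops below max(len, min_capacity), shrink never grows,
// and the data survives the reallocation intact.
fn shrink_demo() -> Vec<u32> {
    let mut v: Vec<u32> = Vec::with_capacity(4096);
    for i in 0..10 {
        v.push(i);
    }
    // Shrink with a lower bound of 0: capacity may drop to len, not below.
    v.shrink_to(0);
    assert!(v.capacity() >= v.len());
    // min_capacity larger than the current capacity: a no-op, never grows.
    let cap = v.capacity();
    v.shrink_to(cap * 2);
    assert_eq!(v.capacity(), cap);
    v
}

fn main() {
    let v = shrink_demo();
    // Data integrity is preserved across the shrink.
    assert_eq!(v, (0..10).collect::<Vec<u32>>());
    println!("ok");
}
```

The kernel tests add the conditions std does not have: the `Shrinkable` gate and the at-least-one-page savings requirement.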
Signed-off-by: Shivam Kalra
---
 rust/kernel/alloc/kvec.rs | 185 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 185 insertions(+)

diff --git a/rust/kernel/alloc/kvec.rs b/rust/kernel/alloc/kvec.rs
index 22a327d69c06..e7d4ba11c2b0 100644
--- a/rust/kernel/alloc/kvec.rs
+++ b/rust/kernel/alloc/kvec.rs
@@ -1505,4 +1505,189 @@ fn add(value: &mut [bool]) {
         func.push_within_capacity(false).unwrap();
     }
 }
+
+    /// Test basic shrink_to functionality for VVec.
+    ///
+    /// Verifies that:
+    /// - Shrinking from multiple pages to less than one page works correctly.
+    /// - Data integrity is preserved after shrinking.
+    /// - Shrinking an already-optimal vector is a no-op.
+    /// - Requesting a min_capacity larger than the current capacity is a no-op.
+    #[test]
+    fn test_shrink_to_vmalloc() {
+        use crate::page::PAGE_SIZE;
+
+        let elements_per_page = PAGE_SIZE / core::mem::size_of::<u32>();
+        let initial_pages = 4;
+        let initial_capacity = elements_per_page * initial_pages;
+
+        let mut v: VVec<u32> = VVec::with_capacity(initial_capacity, GFP_KERNEL).unwrap();
+
+        for i in 0..10 {
+            v.push(i, GFP_KERNEL).unwrap();
+        }
+
+        assert!(v.capacity() >= initial_capacity);
+        assert_eq!(v.len(), 10);
+
+        // Shrink from 4 pages to less than 1 page.
+        v.shrink_to(0, GFP_KERNEL).unwrap();
+
+        // Verify data integrity.
+        assert_eq!(v.len(), 10);
+        for i in 0..10 {
+            assert_eq!(v[i], i as u32);
+        }
+
+        assert!(v.capacity() >= 10);
+        assert!(v.capacity() < initial_capacity);
+
+        // Already optimal: should be a no-op.
+        let cap_after_shrink = v.capacity();
+        v.shrink_to(0, GFP_KERNEL).unwrap();
+        assert_eq!(v.capacity(), cap_after_shrink);
+
+        // min_capacity > capacity: should be a no-op (never grows).
+        v.shrink_to(initial_capacity * 2, GFP_KERNEL).unwrap();
+        assert_eq!(v.capacity(), cap_after_shrink);
+    }
+
+    /// Test that shrink_to is a no-op when no pages would be freed.
+    ///
+    /// Verifies that:
+    /// - When the current and target capacity both fit in one page, no shrink occurs.
+    /// - The shrink_to_fit wrapper behaves identically to shrink_to(0).
+    #[test]
+    fn test_shrink_to_vmalloc_no_page_savings() {
+        use crate::page::PAGE_SIZE;
+
+        let elements_per_page = PAGE_SIZE / core::mem::size_of::<u32>();
+
+        let mut v: VVec<u32> = VVec::with_capacity(elements_per_page, GFP_KERNEL).unwrap();
+
+        for i in 0..(elements_per_page / 2) {
+            v.push(i as u32, GFP_KERNEL).unwrap();
+        }
+
+        let cap_before = v.capacity();
+
+        // No page savings: capacity unchanged.
+        v.shrink_to(0, GFP_KERNEL).unwrap();
+        assert_eq!(v.capacity(), cap_before);
+
+        // shrink_to_fit wrapper: same behavior.
+        v.shrink_to_fit(GFP_KERNEL).unwrap();
+        assert_eq!(v.capacity(), cap_before);
+    }
+
+    /// Test shrink_to on an empty VVec.
+    ///
+    /// Verifies that shrinking an empty vector to capacity 0 frees the allocation.
+    #[test]
+    fn test_shrink_to_vmalloc_empty() {
+        use crate::page::PAGE_SIZE;
+
+        let elements_per_page = PAGE_SIZE / core::mem::size_of::<u32>();
+        let initial_capacity = elements_per_page * 2;
+
+        let mut v: VVec<u32> = VVec::with_capacity(initial_capacity, GFP_KERNEL).unwrap();
+        assert!(v.capacity() >= initial_capacity);
+
+        // Shrinking an empty vector frees the allocation.
+        v.shrink_to(0, GFP_KERNEL).unwrap();
+        assert_eq!(v.capacity(), 0);
+        assert_eq!(v.len(), 0);
+    }
+
+    /// Test partial shrink and consecutive shrink operations.
+    ///
+    /// Verifies that:
+    /// - Shrinking with min_capacity > len but still saving pages works.
+    /// - Consecutive shrink calls maintain data integrity.
+    #[test]
+    fn test_shrink_to_vmalloc_partial_and_consecutive() {
+        use crate::page::PAGE_SIZE;
+
+        let elements_per_page = PAGE_SIZE / core::mem::size_of::<u32>();
+
+        let mut v: VVec<u32> = VVec::with_capacity(elements_per_page * 4, GFP_KERNEL).unwrap();
+
+        // Fill with ~2.5 pages worth of elements.
+        let target_elements = elements_per_page * 2 + elements_per_page / 2;
+        for i in 0..target_elements {
+            v.push(i as u32, GFP_KERNEL).unwrap();
+        }
+
+        // Partial shrink: 4 pages -> 3 pages (min_capacity > len).
+        let min_cap_3_pages = elements_per_page * 3;
+        v.shrink_to(min_cap_3_pages, GFP_KERNEL).unwrap();
+        assert!(v.capacity() >= min_cap_3_pages);
+        assert!(v.capacity() < elements_per_page * 4);
+        assert_eq!(v.len(), target_elements);
+
+        for i in 0..target_elements {
+            assert_eq!(v[i], i as u32);
+        }
+
+        // Consecutive shrink: verify the layout remains consistent.
+        let cap_before = v.capacity();
+        v.shrink_to(0, GFP_KERNEL).unwrap();
+        assert!(v.capacity() >= target_elements);
+        assert!(v.capacity() <= cap_before);
+
+        for i in 0..target_elements {
+            assert_eq!(v[i], i as u32);
+        }
+    }
+
+    /// Test KVVec shrink with a small (kmalloc-backed) allocation.
+    ///
+    /// KVmalloc uses kmalloc for small allocations. Since kmalloc cannot reclaim
+    /// memory when shrinking, shrink_to should be a no-op for a small KVVec.
+    #[test]
+    fn test_shrink_to_kvvec_small() {
+        // Small allocation: likely kmalloc-backed, so shrink should be a no-op.
+        let mut v: KVVec<u32> = KVVec::with_capacity(10, GFP_KERNEL).unwrap();
+        for i in 0..5 {
+            v.push(i, GFP_KERNEL).unwrap();
+        }
+
+        let cap_before = v.capacity();
+        v.shrink_to(0, GFP_KERNEL).unwrap();
+
+        // Kmalloc-backed: capacity unchanged (is_shrinkable returns false).
+        assert_eq!(v.capacity(), cap_before);
+        assert_eq!(v.len(), 5);
+    }
+
+    /// Test KVVec shrink with a large (vmalloc-backed) allocation.
+    ///
+    /// KVmalloc falls back to vmalloc for large allocations. When vmalloc-backed
+    /// and page savings are possible, shrink_to should actually shrink.
+    #[test]
+    fn test_shrink_to_kvvec_large() {
+        use crate::page::PAGE_SIZE;
+
+        let elements_per_page = PAGE_SIZE / core::mem::size_of::<u32>();
+        let initial_capacity = elements_per_page * 4;
+
+        // Large allocation: likely vmalloc-backed.
+        let mut v: KVVec<u32> = KVVec::with_capacity(initial_capacity, GFP_KERNEL).unwrap();
+        for i in 0..10 {
+            v.push(i, GFP_KERNEL).unwrap();
+        }
+
+        assert!(v.capacity() >= initial_capacity);
+
+        // Shrink from 4 pages to less than 1 page.
+        v.shrink_to(0, GFP_KERNEL).unwrap();
+
+        // Vmalloc-backed with page savings: should shrink.
+        // Note: if the allocation happened to use kmalloc, the capacity won't change.
+        // This test verifies the path works; the actual behavior depends on the allocator.
+        assert_eq!(v.len(), 10);
+        for i in 0..10 {
+            assert_eq!(v[i], i as u32);
+        }
+    }
 }
-- 
2.43.0

From nobody Mon Feb 9 08:12:27 2026
From: Shivam Kalra via B4 Relay
Reply-To: shivamklr@cock.li
Date: Sat, 07 Feb 2026 17:02:50 +0530
Subject: [PATCH v3 4/4] rust: binder: shrink all_procs when deregistering processes
Message-Id: <20260207-binder-shrink-vec-v3-v3-4-8ff388563427@cock.li>
References: <20260207-binder-shrink-vec-v3-v3-0-8ff388563427@cock.li>
In-Reply-To: <20260207-binder-shrink-vec-v3-v3-0-8ff388563427@cock.li>
To: Danilo Krummrich, Lorenzo Stoakes, Vlastimil Babka, "Liam R.
Howlett", Uladzislau Rezki, Miguel Ojeda, Boqun Feng, Gary Guo,
    Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl,
    Trevor Gross, Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos,
    Christian Brauner, Carlos Llamas
Cc: rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org,
    Shivam Kalra

When a process is deregistered from the binder context, the all_procs
vector may have significant unused capacity. Add logic to shrink the
vector when its capacity exceeds 128 and usage drops below 50%, reducing
memory overhead on long-running systems.

The shrink operation uses GFP_KERNEL and is allowed to fail gracefully,
since it is purely an optimization. The vector remains valid and
functional even if shrinking fails.

Signed-off-by: Shivam Kalra
---
 drivers/android/binder/context.rs | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/android/binder/context.rs b/drivers/android/binder/context.rs
index 9cf437c025a20..f2505fbf17403 100644
--- a/drivers/android/binder/context.rs
+++ b/drivers/android/binder/context.rs
@@ -94,6 +94,16 @@ pub(crate) fn deregister_process(self: &Arc<Self>, proc: &Arc<Process>) {
         }
         let mut manager = self.manager.lock();
         manager.all_procs.retain(|p| !Arc::ptr_eq(p, proc));
+
+        // Shrink the vector if it has significant unused capacity.
+        // Only shrink if the capacity exceeds 128, to avoid repeated reallocations
+        // for small vectors.
+        let len = manager.all_procs.len();
+        let cap = manager.all_procs.capacity();
+        if cap > 128 && len < cap / 2 {
+            // Shrink to the current length. Ignore allocation failures, since this is
+            // just an optimization; the vector remains valid even if shrinking fails.
+            let _ = manager.all_procs.shrink_to(len, GFP_KERNEL);
+        }
     }
 
     pub(crate) fn set_manager_node(&self, node_ref: NodeRef) -> Result {
-- 
2.43.0
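The deregistration heuristic above (shrink only when the vector is both large and mostly empty) can be sketched in userspace Rust; std's `Vec` stands in for the kernel's `KVVec` of process references, and std's `shrink_to` cannot fail, whereas the kernel version's failure is deliberately ignored. The `128` and `cap / 2` thresholds are the ones from the patch:

```rust
// Sketch of the all_procs shrink heuristic: fire only when capacity > 128
// and usage is below 50%, so small or well-utilized vectors are left alone.
fn maybe_shrink<T>(v: &mut Vec<T>) -> bool {
    let (len, cap) = (v.len(), v.capacity());
    if cap > 128 && len < cap / 2 {
        // Kernel code: `let _ = v.shrink_to(len, GFP_KERNEL);` (fallible).
        v.shrink_to(len);
        return true;
    }
    false
}

fn main() {
    // Large and mostly empty: the heuristic fires and capacity drops.
    let mut big: Vec<u64> = Vec::with_capacity(256);
    big.extend(0..10);
    assert!(maybe_shrink(&mut big));
    assert!(big.capacity() < 256);

    // Small vector: left alone to avoid churn on short-lived growth.
    let mut small: Vec<u64> = Vec::with_capacity(64);
    small.extend(0..10);
    assert!(!maybe_shrink(&mut small));
    assert!(small.capacity() >= 64);
    println!("ok");
}
```

Gating on both conditions keeps the common case (small, frequently changing vectors) free of realloc churn while still bounding memory on long-running systems.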