[PATCH] rust_binder: fix oneway spam detection
Posted by Carlos Llamas 1 month, 2 weeks ago
The spam detection logic in TreeRange was executed before the current
request was inserted into the tree. So the new request was not being
factored into the spam calculation. Fix this by moving the logic after
the new range has been inserted.

Also, the detection logic for ArrayRange was missing altogether, which
meant large spamming transactions could get away without being detected.
Fix this by implementing an equivalent low_oneway_space() in ArrayRange.

Note that I looked into centralizing this logic in RangeAllocator but
iterating through 'state' and 'size' got a bit too complicated (for me)
and I abandoned this effort.

Cc: stable@vger.kernel.org
Cc: Alice Ryhl <aliceryhl@google.com>
Fixes: eafedbc7c050 ("rust_binder: add Rust Binder driver")
Signed-off-by: Carlos Llamas <cmllamas@google.com>
---
 drivers/android/binder/range_alloc/array.rs | 35 +++++++++++++++++++--
 drivers/android/binder/range_alloc/mod.rs   |  4 +--
 drivers/android/binder/range_alloc/tree.rs  | 18 +++++------
 3 files changed, 44 insertions(+), 13 deletions(-)

diff --git a/drivers/android/binder/range_alloc/array.rs b/drivers/android/binder/range_alloc/array.rs
index 07e1dec2ce63..ada1d1b4302e 100644
--- a/drivers/android/binder/range_alloc/array.rs
+++ b/drivers/android/binder/range_alloc/array.rs
@@ -118,7 +118,7 @@ pub(crate) fn reserve_new(
         size: usize,
         is_oneway: bool,
         pid: Pid,
-    ) -> Result<usize> {
+    ) -> Result<(usize, bool)> {
         // Compute new value of free_oneway_space, which is set only on success.
         let new_oneway_space = if is_oneway {
             match self.free_oneway_space.checked_sub(size) {
@@ -146,7 +146,38 @@ pub(crate) fn reserve_new(
             .ok()
             .unwrap();
 
-        Ok(insert_at_offset)
+        // Start detecting spammers once we have less than 20%
+        // of async space left (which is less than 10% of total
+        // buffer size).
+        //
+        // (This will short-circuit, so `low_oneway_space` is
+        // only called when necessary.)
+        let oneway_spam_detected =
+            is_oneway && new_oneway_space < self.size / 10 && self.low_oneway_space(pid);
+
+        Ok((insert_at_offset, oneway_spam_detected))
+    }
+
+    /// Find the amount and size of buffers allocated by the current caller.
+    ///
+    /// The idea is that once we cross the threshold, whoever is responsible
+    /// for the low async space is likely to try to send another async transaction,
+    /// and at some point we'll catch them in the act.  This is more efficient
+    /// than keeping a map per pid.
+    fn low_oneway_space(&self, calling_pid: Pid) -> bool {
+        let mut total_alloc_size = 0;
+        let mut num_buffers = 0;
+
+        // Warn if this pid has more than 50 transactions, or more than 50% of
+        // async space (which is 25% of total buffer size). Oneway spam is only
+        // detected when the threshold is exceeded.
+        for range in &self.ranges {
+            if range.state.is_oneway() && range.state.pid() == calling_pid {
+                total_alloc_size += range.size;
+                num_buffers += 1;
+            }
+        }
+        num_buffers > 50 || total_alloc_size > self.size / 4
     }
 
     pub(crate) fn reservation_abort(&mut self, offset: usize) -> Result<FreedRange> {
diff --git a/drivers/android/binder/range_alloc/mod.rs b/drivers/android/binder/range_alloc/mod.rs
index 2301e2bc1a1f..1f4734468ff1 100644
--- a/drivers/android/binder/range_alloc/mod.rs
+++ b/drivers/android/binder/range_alloc/mod.rs
@@ -188,11 +188,11 @@ pub(crate) fn reserve_new(&mut self, mut args: ReserveNewArgs<T>) -> Result<Rese
                 self.reserve_new(args)
             }
             Impl::Array(array) => {
-                let offset =
+                let (offset, oneway_spam_detected) =
                     array.reserve_new(args.debug_id, args.size, args.is_oneway, args.pid)?;
                 Ok(ReserveNew::Success(ReserveNewSuccess {
                     offset,
-                    oneway_spam_detected: false,
+                    oneway_spam_detected,
                     _empty_array_alloc: args.empty_array_alloc,
                     _new_tree_alloc: args.new_tree_alloc,
                     _tree_alloc: args.tree_alloc,
diff --git a/drivers/android/binder/range_alloc/tree.rs b/drivers/android/binder/range_alloc/tree.rs
index 838fdd2b47ea..48796fcdb362 100644
--- a/drivers/android/binder/range_alloc/tree.rs
+++ b/drivers/android/binder/range_alloc/tree.rs
@@ -164,15 +164,6 @@ pub(crate) fn reserve_new(
             self.free_oneway_space
         };
 
-        // Start detecting spammers once we have less than 20%
-        // of async space left (which is less than 10% of total
-        // buffer size).
-        //
-        // (This will short-circut, so `low_oneway_space` is
-        // only called when necessary.)
-        let oneway_spam_detected =
-            is_oneway && new_oneway_space < self.size / 10 && self.low_oneway_space(pid);
-
         let (found_size, found_off, tree_node, free_tree_node) = match self.find_best_match(size) {
             None => {
                 pr_warn!("ENOSPC from range_alloc.reserve_new - size: {}", size);
@@ -203,6 +194,15 @@ pub(crate) fn reserve_new(
             self.free_tree.insert(free_tree_node);
         }
 
+        // Start detecting spammers once we have less than 20%
+        // of async space left (which is less than 10% of total
+        // buffer size).
+        //
+        // (This will short-circuit, so `low_oneway_space` is
+        // only called when necessary.)
+        let oneway_spam_detected =
+            is_oneway && new_oneway_space < self.size / 10 && self.low_oneway_space(pid);
+
         Ok((found_off, oneway_spam_detected))
     }
 
-- 
2.53.0.239.g8d8fc8a987-goog
Re: [PATCH] rust_binder: fix oneway spam detection
Posted by Alice Ryhl 1 month, 2 weeks ago
On Tue, Feb 10, 2026 at 11:28:20PM +0000, Carlos Llamas wrote:
> The spam detection logic in TreeRange was executed before the current
> request was inserted into the tree. So the new request was not being
> factored into the spam calculation. Fix this by moving the logic after
> the new range has been inserted.
> 
> Also, the detection logic for ArrayRange was missing altogether, which
> meant large spamming transactions could get away without being detected.
> Fix this by implementing an equivalent low_oneway_space() in ArrayRange.
> 
> Note that I looked into centralizing this logic in RangeAllocator but
> iterating through 'state' and 'size' got a bit too complicated (for me)
> and I abandoned this effort.

I think the current approach is fine.

> Cc: stable@vger.kernel.org
> Cc: Alice Ryhl <aliceryhl@google.com>
> Fixes: eafedbc7c050 ("rust_binder: add Rust Binder driver")
> Signed-off-by: Carlos Llamas <cmllamas@google.com>

Reviewed-by: Alice Ryhl <aliceryhl@google.com>

> +    /// Find the amount and size of buffers allocated by the current caller.
> +    ///
> +    /// The idea is that once we cross the threshold, whoever is responsible
> +    /// for the low async space is likely to try to send another async transaction,
> +    /// and at some point we'll catch them in the act.  This is more efficient
> +    /// than keeping a map per pid.
> +    fn low_oneway_space(&self, calling_pid: Pid) -> bool {
> +        let mut total_alloc_size = 0;
> +        let mut num_buffers = 0;
> +
> +        // Warn if this pid has more than 50 transactions, or more than 50% of
> +        // async space (which is 25% of total buffer size). Oneway spam is only
> +        // detected when the threshold is exceeded.
> +        for range in &self.ranges {
> +            if range.state.is_oneway() && range.state.pid() == calling_pid {
> +                total_alloc_size += range.size;
> +                num_buffers += 1;
> +            }
> +        }
> +        num_buffers > 50 || total_alloc_size > self.size / 4

The array can never contain 50 buffers, but we should still keep this
check in case that's changed in the future.

Alice
Re: [PATCH] rust_binder: fix oneway spam detection
Posted by Tiffany Yang 1 month, 2 weeks ago
Alice Ryhl <aliceryhl@google.com> writes:

> On Tue, Feb 10, 2026 at 11:28:20PM +0000, Carlos Llamas wrote:
>> The spam detection logic in TreeRange was executed before the current
>> request was inserted into the tree. So the new request was not being
>> factored into the spam calculation. Fix this by moving the logic after
>> the new range has been inserted.
>> 
>> Also, the detection logic for ArrayRange was missing altogether, which
>> meant large spamming transactions could get away without being detected.
>> Fix this by implementing an equivalent low_oneway_space() in ArrayRange.
>> 
>> Note that I looked into centralizing this logic in RangeAllocator but
>> iterating through 'state' and 'size' got a bit too complicated (for me)
>> and I abandoned this effort.
>
> I think the current approach is fine.
>

Is there a pattern that would allow us to avoid so much duplicate code?
Or like... a nice way to call into a shared low_oneway_space? It's
frustrating that the two implementations are basically the same except
for how they iterate over buffers. I've been thinking of rust binder as
binder's chance at a fresh start, so I'm reluctant to introduce this kind
of tech debt so early on.

I don't have a clear idea of what the appropriate fix would be here, but
I'd be happy to help do some plumbing if it'll make things smoother in
the long run!

>> Cc: stable@vger.kernel.org
>> Cc: Alice Ryhl <aliceryhl@google.com>
>> Fixes: eafedbc7c050 ("rust_binder: add Rust Binder driver")
>> Signed-off-by: Carlos Llamas <cmllamas@google.com>
>
> Reviewed-by: Alice Ryhl <aliceryhl@google.com>
>
>> +    /// Find the amount and size of buffers allocated by the current caller.
>> +    ///
>> +    /// The idea is that once we cross the threshold, whoever is responsible
>> +    /// for the low async space is likely to try to send another async transaction,
>> +    /// and at some point we'll catch them in the act.  This is more efficient
>> +    /// than keeping a map per pid.
>> +    fn low_oneway_space(&self, calling_pid: Pid) -> bool {
>> +        let mut total_alloc_size = 0;
>> +        let mut num_buffers = 0;
>> +
>> +        // Warn if this pid has more than 50 transactions, or more than 50% of
>> +        // async space (which is 25% of total buffer size). Oneway spam is only
>> +        // detected when the threshold is exceeded.
>> +        for range in &self.ranges {
>> +            if range.state.is_oneway() && range.state.pid() == calling_pid {
>> +                total_alloc_size += range.size;
>> +                num_buffers += 1;
>> +            }
>> +        }
>> +        num_buffers > 50 || total_alloc_size > self.size / 4
>
> The array can never contain 50 buffers, but we should still keep this
> check in case that's changed in the future.
>

Now I'm second-guessing myself. The existing structure would make it
easy to use different logic to decide if something's spam depending on
the RangeAllocator and that might be helpful!

Who knows! Not I!

-- 
Tiffany Y. Yang
Re: [PATCH] rust_binder: fix oneway spam detection
Posted by Alice Ryhl 1 month, 2 weeks ago
On Wed, Feb 11, 2026 at 11:41:40PM -0800, Tiffany Yang wrote:
> Alice Ryhl <aliceryhl@google.com> writes:
> 
> > On Tue, Feb 10, 2026 at 11:28:20PM +0000, Carlos Llamas wrote:
> >> The spam detection logic in TreeRange was executed before the current
> >> request was inserted into the tree. So the new request was not being
> >> factored into the spam calculation. Fix this by moving the logic after
> >> the new range has been inserted.
> >> 
> >> Also, the detection logic for ArrayRange was missing altogether, which
> >> meant large spamming transactions could get away without being detected.
> >> Fix this by implementing an equivalent low_oneway_space() in ArrayRange.
> >> 
> >> Note that I looked into centralizing this logic in RangeAllocator but
> >> iterating through 'state' and 'size' got a bit too complicated (for me)
> >> and I abandoned this effort.
> >
> > I think the current approach is fine.
> >
> 
> Is there a pattern that would allow us to avoid so much duplicate code?
> Or like... a nice way to call into a shared low_oneway_space? It's
> frustrating that the two implementations are basically the same except
> for how they iterate over buffers. I've been thinking of rust binder as
> binder's chance at a fresh start, so I'm reticent to introduce this kind
> of tech debt so early on.
> 
> I don't have a clear idea of what the appropriate fix would be here, but
> I'd be happy to help do some plumbing if it'll make things smoother in
> the long run!

We could potentially have a single low_oneway_space() that takes an
iterator over the ranges. Then each impl can pass an iterator specific
to its own impl.
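
Something like this, as a completely untested sketch (it assumes `Pid`
implements `PartialEq`; field names are taken from the array impl in
this patch):

    /// Shared helper: `oneway_ranges` yields (pid, size) for the
    /// currently allocated oneway buffers.
    fn low_oneway_space(
        calling_pid: Pid,
        total_size: usize,
        oneway_ranges: impl Iterator<Item = (Pid, usize)>,
    ) -> bool {
        let mut total_alloc_size = 0;
        let mut num_buffers = 0;
        for (pid, size) in oneway_ranges {
            if pid == calling_pid {
                total_alloc_size += size;
                num_buffers += 1;
            }
        }
        // Same thresholds both impls use today: more than 50 buffers,
        // or more than 25% of the total buffer size.
        num_buffers > 50 || total_alloc_size > total_size / 4
    }

with the array side passing something like:

    low_oneway_space(
        calling_pid,
        self.size,
        self.ranges
            .iter()
            .filter(|r| r.state.is_oneway())
            .map(|r| (r.state.pid(), r.size)),
    )

and the tree side building the equivalent iterator over its nodes.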

Alice
Re: [PATCH] rust_binder: fix oneway spam detection
Posted by Miguel Ojeda 1 month, 2 weeks ago
On Wed, Feb 11, 2026 at 12:30 AM Carlos Llamas <cmllamas@google.com> wrote:
>
> The spam detection logic in TreeRange was executed before the current
> request was inserted into the tree. So the new request was not being
> factored into the spam calculation. Fix this by moving the logic after
> the new range has been inserted.
>
> Also, the detection logic for ArrayRange was missing altogether, which
> meant large spamming transactions could get away without being detected.
> Fix this by implementing an equivalent low_oneway_space() in ArrayRange.
>
> Note that I looked into centralizing this logic in RangeAllocator but
> iterating through 'state' and 'size' got a bit too complicated (for me)
> and I abandoned this effort.
>
> Cc: stable@vger.kernel.org
> Cc: Alice Ryhl <aliceryhl@google.com>
> Fixes: eafedbc7c050 ("rust_binder: add Rust Binder driver")
> Signed-off-by: Carlos Llamas <cmllamas@google.com>

Cc'ing rust-for-linux (we still do our best to send all Rust-related
patches there too, to have an index).

Cheers,
Miguel