When updating local storage maps with BPF_F_LOCK on the fast path, the
special fields of the old value were never freed when the value was
updated in place. This could cause memory referenced by
BPF_KPTR_{REF,PERCPU} fields to be held until the map itself is freed.

Similarly, on the slow path (under the local storage lock), the old
sdata's special fields were never freed when BPF_F_LOCK was specified,
causing the same leak.

Fix both cases by calling bpf_obj_free_fields() after
copy_map_value_locked(). Since copy_map_value_locked() copies around
the special fields, the fields left in old_sdata->data are still the
old ones, so they can be released safely at that point.
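For illustration, a map whose value carries both a bpf_spin_lock and a
referenced kptr hits the affected paths when updated with BPF_F_LOCK.
A minimal sketch (hypothetical names, not part of this patch; assumes
vmlinux.h and bpf/bpf_helpers.h):

struct node_data {
        long data;
};

struct map_value {
        struct bpf_spin_lock lock;
        struct node_data __kptr *node; /* BPF_KPTR_REF field */
};

struct {
        __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
        __uint(map_flags, BPF_F_NO_PREALLOC);
        __type(key, int);
        __type(value, struct map_value);
} task_store SEC(".maps");

Updating an existing element with BPF_F_LOCK from the syscall side
copies around the kptr field, so the node_data object previously
stored by a BPF program stayed referenced until the map was freed.
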
Fixes: 9db44fdd8105 ("bpf: Support kptrs in local storage maps")
Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
---
kernel/bpf/bpf_local_storage.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
index b931fbceb54da..9f447530f9564 100644
--- a/kernel/bpf/bpf_local_storage.c
+++ b/kernel/bpf/bpf_local_storage.c
@@ -609,6 +609,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
if (old_sdata && selem_linked_to_storage_lockless(SELEM(old_sdata))) {
copy_map_value_locked(&smap->map, old_sdata->data,
value, false);
+ bpf_obj_free_fields(smap->map.record, old_sdata->data);
return old_sdata;
}
}
@@ -641,6 +642,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
if (old_sdata && (map_flags & BPF_F_LOCK)) {
copy_map_value_locked(&smap->map, old_sdata->data, value,
false);
+ bpf_obj_free_fields(smap->map.record, old_sdata->data);
selem = SELEM(old_sdata);
goto unlock;
}
--
2.51.1
On Thu, Oct 30, 2025 at 8:25 AM Leon Hwang <leon.hwang@linux.dev> wrote:
>
> [...]
> @@ -609,6 +609,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
> if (old_sdata && selem_linked_to_storage_lockless(SELEM(old_sdata))) {
> copy_map_value_locked(&smap->map, old_sdata->data,
> value, false);
> + bpf_obj_free_fields(smap->map.record, old_sdata->data);
> return old_sdata;
> }
> }
> @@ -641,6 +642,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
> if (old_sdata && (map_flags & BPF_F_LOCK)) {
> copy_map_value_locked(&smap->map, old_sdata->data, value,
> false);
> + bpf_obj_free_fields(smap->map.record, old_sdata->data);
> selem = SELEM(old_sdata);
> goto unlock;
> }
Even with rqspinlock, I feel this is a can of worms with potential
recursion issues.
I think it's better to disallow the combination of special fields and BPF_F_LOCK.
We already do that for uptr:
        if ((map_flags & BPF_F_LOCK) &&
            btf_record_has_field(map->record, BPF_UPTR))
                return -EOPNOTSUPP;
let's do it for all special types.
So patches 2 and 3 will change to -EOPNOTSUPP.
pw-bot: cr
On 31/10/25 06:35, Alexei Starovoitov wrote:
> On Thu, Oct 30, 2025 at 8:25 AM Leon Hwang <leon.hwang@linux.dev> wrote:
>>
[...]
>> @@ -641,6 +642,7 @@ bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
>> if (old_sdata && (map_flags & BPF_F_LOCK)) {
>> copy_map_value_locked(&smap->map, old_sdata->data, value,
>> false);
>> + bpf_obj_free_fields(smap->map.record, old_sdata->data);
>> selem = SELEM(old_sdata);
>> goto unlock;
>> }
>
> Even with rqspinlock, I feel this is a can of worms with potential
> recursion issues.
>
> I think it's better to disallow the combination of special fields and BPF_F_LOCK.
> We already do that for uptr:
>         if ((map_flags & BPF_F_LOCK) &&
>             btf_record_has_field(map->record, BPF_UPTR))
>                 return -EOPNOTSUPP;
>
> let's do it for all special types.
> So patches 2 and 3 will change to -EOPNOTSUPP.
>
Do you mean disallowing the combination of BPF_F_LOCK with other special
fields (except for BPF_SPIN_LOCK) on the UAPI side — for example, in
lookup_elem() and update_elem()?
If so, I'd like to send a separate patch set to implement that after
the series “bpf: Introduce BPF_F_CPU and BPF_F_ALL_CPUS flags for
percpu maps” is applied.
After that, we can easily add the check in bpf_map_check_op_flags() for
the UAPI side, like this:
static inline int bpf_map_check_op_flags(...)
{
        if ((flags & BPF_F_LOCK) &&
            !btf_record_has_field(map->record, BPF_SPIN_LOCK))
                return -EINVAL;

        if ((flags & BPF_F_LOCK) &&
            btf_record_has_field(map->record, ~BPF_SPIN_LOCK))
                return -EOPNOTSUPP;

        return 0;
}
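(This relies on btf_record_has_field() being a mask test against
rec->field_mask, so passing ~BPF_SPIN_LOCK matches any special field
other than the spin lock itself.)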
Then we can clean up some code, including the bpf_obj_free_fields()
calls that follow copy_map_value_locked(), as well as the existing UPTR
check.
Thanks,
Leon
On Sun, Nov 2, 2025 at 9:18 PM Leon Hwang <leon.hwang@linux.dev> wrote:
>
>
>
> On 31/10/25 06:35, Alexei Starovoitov wrote:
>
> > [...]
>
> Do you mean disallowing the combination of BPF_F_LOCK with other special
> fields (except for BPF_SPIN_LOCK) on the UAPI side — for example, in
> lookup_elem() and update_elem()?
yes
> If so, I'd like to send a separate patch set to implement that after
> the series “bpf: Introduce BPF_F_CPU and BPF_F_ALL_CPUS flags for
> percpu maps” is applied.
>
> After that, we can easily add the check in bpf_map_check_op_flags() for
> the UAPI side, like this:
>
> [...]
>
> Then we can clean up some code, including the bpf_obj_free_fields()
> calls that follow copy_map_value_locked(), as well as the existing UPTR
> check.
ok. fair enough.