On Sun, Oct 26, 2025 at 8:40 AM Leon Hwang <leon.hwang@linux.dev> wrote:
>
> Since [lru_,]percpu_hash maps support BPF_KPTR_{REF,PERCPU} fields,
> the missing calls to 'bpf_obj_free_fields()' in 'pcpu_copy_value()'
> cause the memory referenced by such fields in the old value to be
> held until the map itself gets freed, i.e. updating an existing
> element leaks whatever its kptr fields pointed to.
>
> Fix this by calling 'bpf_obj_free_fields()' after
> 'copy_map_value[,_long]()' in 'pcpu_copy_value()'.
>
> Fixes: 65334e64a493 ("bpf: Support kptrs in percpu hashmap and percpu LRU hashmap")
> Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
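
For anyone trying to reproduce this: the leak needs a referenced kptr
stored in a per-cpu value, followed by a plain update of the same key.
Roughly along these lines (untested sketch, all map/struct/prog names
invented, assumes the bpf_task_{acquire,release}() kfuncs are
available):

/* percpu_hash_kptr_leak.bpf.c -- illustration only, not part of the patch */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct task_struct *bpf_task_acquire(struct task_struct *p) __ksym;
void bpf_task_release(struct task_struct *p) __ksym;

/* map value carrying a referenced kptr (BPF_KPTR_REF) */
struct val {
	struct task_struct __kptr *task;
};

struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_HASH);
	__uint(max_entries, 1);
	__type(key, u32);
	__type(value, struct val);
} pcpu_map SEC(".maps");

SEC("fentry/do_unlinkat")
int BPF_PROG(store_task)
{
	struct task_struct *t, *old;
	u32 key = 0;
	struct val *v;

	/* key 0 must have been created from user space beforehand */
	v = bpf_map_lookup_elem(&pcpu_map, &key);
	if (!v)
		return 0;

	t = bpf_task_acquire(bpf_get_current_task_btf());
	if (!t)
		return 0;

	/* publish a referenced task kptr into this CPU's copy of the value */
	old = bpf_kptr_xchg(&v->task, t);
	if (old)
		bpf_task_release(old);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

Once the program has stored a task, a plain bpf_map_update_elem() on
key 0 from user space goes through pcpu_copy_value(); without the
bpf_obj_free_fields() calls added here, the task reference stored by
the program is only dropped when the map itself is destroyed.
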
> ---
> kernel/bpf/hashtab.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index c2fcd0cd51e51..26308adc9ccb3 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -950,12 +950,14 @@ static void pcpu_copy_value(struct bpf_htab *htab, void __percpu *pptr,
>  	if (!onallcpus) {
>  		/* copy true value_size bytes */
>  		copy_map_value(&htab->map, this_cpu_ptr(pptr), value);
> +		bpf_obj_free_fields(htab->map.record, this_cpu_ptr(pptr));
It would make sense to assign the this_cpu_ptr() result to a local
variable and reuse it between copy_map_value() and
bpf_obj_free_fields(); same for per_cpu_ptr() in the loop below.
Consider that for a follow-up.
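Untested, just to show the shape; only the locals are new, the rest is
the existing pcpu_copy_value() body:

	if (!onallcpus) {
		void *ptr = this_cpu_ptr(pptr);

		/* copy true value_size bytes */
		copy_map_value(&htab->map, ptr, value);
		bpf_obj_free_fields(htab->map.record, ptr);
	} else {
		u32 size = round_up(htab->map.value_size, 8);
		int off = 0, cpu;

		for_each_possible_cpu(cpu) {
			void *ptr = per_cpu_ptr(pptr, cpu);

			copy_map_value_long(&htab->map, ptr, value + off);
			bpf_obj_free_fields(htab->map.record, ptr);
			off += size;
		}
	}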
>  	} else {
>  		u32 size = round_up(htab->map.value_size, 8);
>  		int off = 0, cpu;
> 
>  		for_each_possible_cpu(cpu) {
>  			copy_map_value_long(&htab->map, per_cpu_ptr(pptr, cpu), value + off);
> +			bpf_obj_free_fields(htab->map.record, per_cpu_ptr(pptr, cpu));
>  			off += size;
>  		}
>  	}
> --
> 2.51.0
>