From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Andrii Nakryiko, Alexei Starovoitov, Sasha Levin
Subject: [PATCH 5.18 0482/1095] bpf: fix potential 32-bit overflow when accessing ARRAY map element
Date: Mon, 15 Aug 2022 19:58:01 +0200
Message-Id: <20220815180449.508630492@linuxfoundation.org>
In-Reply-To: <20220815180429.240518113@linuxfoundation.org>
References: <20220815180429.240518113@linuxfoundation.org>

From: Andrii Nakryiko

[ Upstream commit 87ac0d600943994444e24382a87aa19acc4cd3d4 ]

If a BPF array map is bigger than 4GB, the element pointer calculation
can overflow because both the index and elem_size are u32. Fix this
everywhere by forcing 64-bit multiplication. Extract this formula into a
separate small helper and use it consistently in various places.

The speculation-preventing formula utilizing the index_mask trick is
left as is, but explicit u64 casts are added in both places.

Fixes: c85d69135a91 ("bpf: move memory size checks to bpf_map_charge_init()")
Signed-off-by: Andrii Nakryiko
Link: https://lore.kernel.org/r/20220715053146.1291891-2-andrii@kernel.org
Signed-off-by: Alexei Starovoitov
Signed-off-by: Sasha Levin
---
 kernel/bpf/arraymap.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index c3de63ce574e..d015fce67865 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -155,6 +155,11 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
 	return &array->map;
 }
 
+static void *array_map_elem_ptr(struct bpf_array* array, u32 index)
+{
+	return array->value + (u64)array->elem_size * index;
+}
+
 /* Called from syscall or from eBPF program */
 static void *array_map_lookup_elem(struct bpf_map *map, void *key)
 {
@@ -164,7 +169,7 @@ static void *array_map_lookup_elem(struct bpf_map *map, void *key)
 	if (unlikely(index >= array->map.max_entries))
 		return NULL;
 
-	return array->value + array->elem_size * (index & array->index_mask);
+	return array->value + (u64)array->elem_size * (index & array->index_mask);
 }
 
 static int array_map_direct_value_addr(const struct bpf_map *map, u64 *imm,
@@ -324,7 +329,7 @@ static int array_map_update_elem(struct bpf_map *map, void *key, void *value,
 			       value, map->value_size);
 	} else {
 		val = array->value +
-			array->elem_size * (index & array->index_mask);
+			(u64)array->elem_size * (index & array->index_mask);
 		if (map_flags & BPF_F_LOCK)
 			copy_map_value_locked(map, val, value, false);
 		else
@@ -393,8 +398,7 @@ static void array_map_free_timers(struct bpf_map *map)
 		return;
 
 	for (i = 0; i < array->map.max_entries; i++)
-		bpf_timer_cancel_and_free(array->value + array->elem_size * i +
-					  map->timer_off);
+		bpf_timer_cancel_and_free(array_map_elem_ptr(array, i) + map->timer_off);
 }
 
 /* Called when map->refcnt goes to zero, either from workqueue or from syscall */
@@ -405,7 +409,7 @@ static void array_map_free(struct bpf_map *map)
 
 	if (map_value_has_kptrs(map)) {
 		for (i = 0; i < array->map.max_entries; i++)
-			bpf_map_free_kptrs(map, array->value + array->elem_size * i);
+			bpf_map_free_kptrs(map, array_map_elem_ptr(array, i));
 		bpf_map_free_kptr_off_tab(map);
 	}
 
@@ -541,7 +545,7 @@ static void *bpf_array_map_seq_start(struct seq_file *seq, loff_t *pos)
 	index = info->index & array->index_mask;
 	if (info->percpu_value_buf)
 		return array->pptrs[index];
-	return array->value + array->elem_size * index;
+	return array_map_elem_ptr(array, index);
 }
 
 static void *bpf_array_map_seq_next(struct seq_file *seq, void *v, loff_t *pos)
@@ -560,7 +564,7 @@ static void *bpf_array_map_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 	index = info->index & array->index_mask;
 	if (info->percpu_value_buf)
 		return array->pptrs[index];
-	return array->value + array->elem_size * index;
+	return array_map_elem_ptr(array, index);
 }
 
 static int __bpf_array_map_seq_show(struct seq_file *seq, void *v)
@@ -675,7 +679,7 @@ static int bpf_for_each_array_elem(struct bpf_map *map, bpf_callback_t callback_
 		if (is_percpu)
 			val = this_cpu_ptr(array->pptrs[i]);
 		else
-			val = array->value + array->elem_size * i;
+			val = array_map_elem_ptr(array, i);
 		num_elems++;
 		key = i;
 		ret = callback_fn((u64)(long)map, (u64)(long)&key,
-- 
2.35.1