From: Alexander Lobakin
To: "David S.
 Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Toke Høiland-Jørgensen, Alexei Starovoitov,
 Daniel Borkmann, John Fastabend, Andrii Nakryiko, Maciej Fijalkowski,
 Stanislav Fomichev, Magnus Karlsson,
 nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v4 04/19] bpf, xdp: constify some bpf_prog * function arguments
Date: Thu, 7 Nov 2024 17:10:11 +0100
Message-ID: <20241107161026.2903044-5-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.47.0
In-Reply-To: <20241107161026.2903044-1-aleksander.lobakin@intel.com>
References: <20241107161026.2903044-1-aleksander.lobakin@intel.com>

In lots of places, the bpf_prog pointer is used only for tracing or
other operations that don't modify the structure itself. The same
applies to net_device. Address at least some of these and add `const`
attributes there. The object code didn't change, but the qualifiers
may prevent unwanted data modifications and also allow more helpers
to take const arguments.
Reviewed-by: Toke Høiland-Jørgensen
Signed-off-by: Alexander Lobakin
---
 include/linux/bpf.h       | 12 ++++++------
 include/linux/filter.h    |  9 +++++----
 include/linux/netdevice.h |  6 +++---
 include/linux/skbuff.h    |  2 +-
 kernel/bpf/devmap.c       |  8 ++++----
 net/core/dev.c            | 10 +++++-----
 net/core/filter.c         | 29 ++++++++++++++++-------------
 net/core/skbuff.c         |  2 +-
 8 files changed, 41 insertions(+), 37 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index bdadb0bb6cec..0d537d547dce 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2542,10 +2542,10 @@ int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_frame *xdpf,
 int dev_map_enqueue_multi(struct xdp_frame *xdpf, struct net_device *dev_rx,
 			  struct bpf_map *map, bool exclude_ingress);
 int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
-			     struct bpf_prog *xdp_prog);
+			     const struct bpf_prog *xdp_prog);
 int dev_map_redirect_multi(struct net_device *dev, struct sk_buff *skb,
-			   struct bpf_prog *xdp_prog, struct bpf_map *map,
-			   bool exclude_ingress);
+			   const struct bpf_prog *xdp_prog,
+			   struct bpf_map *map, bool exclude_ingress);
 
 void __cpu_map_flush(struct list_head *flush_list);
 int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_frame *xdpf,
@@ -2809,15 +2809,15 @@ struct sk_buff;
 
 static inline int dev_map_generic_redirect(struct bpf_dtab_netdev *dst,
 					   struct sk_buff *skb,
-					   struct bpf_prog *xdp_prog)
+					   const struct bpf_prog *xdp_prog)
 {
 	return 0;
 }
 
 static inline int dev_map_redirect_multi(struct net_device *dev,
 					 struct sk_buff *skb,
-					 struct bpf_prog *xdp_prog, struct bpf_map *map,
-					 bool exclude_ingress)
+					 const struct bpf_prog *xdp_prog,
+					 struct bpf_map *map, bool exclude_ingress)
 {
 	return 0;
 }
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 7d7578a8eac1..ee067ab13272 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -1178,17 +1178,18 @@ static inline int xdp_ok_fwd_dev(const struct net_device *fwd,
  * This does not appear to be a real limitation for existing software.
  */
 int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb,
-			    struct xdp_buff *xdp, struct bpf_prog *prog);
+			    struct xdp_buff *xdp, const struct bpf_prog *prog);
 int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
-		    struct bpf_prog *prog);
+		    const struct bpf_prog *prog);
 int xdp_do_redirect_frame(struct net_device *dev, struct xdp_buff *xdp,
 			  struct xdp_frame *xdpf,
-			  struct bpf_prog *prog);
+			  const struct bpf_prog *prog);
 void xdp_do_flush(void);
 
-void bpf_warn_invalid_xdp_action(struct net_device *dev, struct bpf_prog *prog, u32 act);
+void bpf_warn_invalid_xdp_action(const struct net_device *dev,
+				 const struct bpf_prog *prog, u32 act);
 
 #ifdef CONFIG_INET
 struct sock *bpf_run_sk_reuseport(struct sock_reuseport *reuse, struct sock *sk,
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 3c552b648b27..201f0c0ec62e 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3941,9 +3941,9 @@ static inline void dev_consume_skb_any(struct sk_buff *skb)
 }
 
 u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
-			     struct bpf_prog *xdp_prog);
-void generic_xdp_tx(struct sk_buff *skb, struct bpf_prog *xdp_prog);
-int do_xdp_generic(struct bpf_prog *xdp_prog, struct sk_buff **pskb);
+			     const struct bpf_prog *xdp_prog);
+void generic_xdp_tx(struct sk_buff *skb, const struct bpf_prog *xdp_prog);
+int do_xdp_generic(const struct bpf_prog *xdp_prog, struct sk_buff **pskb);
 int netif_rx(struct sk_buff *skb);
 int __netif_rx(struct sk_buff *skb);
 
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 9e55223ba362..d6920df9b620 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3618,7 +3618,7 @@ static inline netmem_ref skb_frag_netmem(const skb_frag_t *frag)
 int skb_pp_cow_data(struct page_pool *pool, struct sk_buff **pskb,
 		    unsigned int headroom);
 int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb,
-			 struct bpf_prog *prog);
+			 const struct bpf_prog *prog);
 
 /**
  * skb_frag_address - gets the address of the data contained in a paged fragment
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 7878be18e9d2..effde52bc857 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -678,7 +678,7 @@ int dev_map_enqueue_multi(struct xdp_frame *xdpf, struct net_device *dev_rx,
 }
 
 int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
-			     struct bpf_prog *xdp_prog)
+			     const struct bpf_prog *xdp_prog)
 {
 	int err;
 
@@ -701,7 +701,7 @@ int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
 
 static int dev_map_redirect_clone(struct bpf_dtab_netdev *dst,
 				  struct sk_buff *skb,
-				  struct bpf_prog *xdp_prog)
+				  const struct bpf_prog *xdp_prog)
 {
 	struct sk_buff *nskb;
 	int err;
@@ -720,8 +720,8 @@ static int dev_map_redirect_clone(struct bpf_dtab_netdev *dst,
 }
 
 int dev_map_redirect_multi(struct net_device *dev, struct sk_buff *skb,
-			   struct bpf_prog *xdp_prog, struct bpf_map *map,
-			   bool exclude_ingress)
+			   const struct bpf_prog *xdp_prog,
+			   struct bpf_map *map, bool exclude_ingress)
 {
 	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
 	struct bpf_dtab_netdev *dst, *last_dst = NULL;
diff --git a/net/core/dev.c b/net/core/dev.c
index 6a31152e4606..32dd742450b6 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4931,7 +4931,7 @@ static struct netdev_rx_queue *netif_get_rxqueue(struct sk_buff *skb)
 }
 
 u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
-			     struct bpf_prog *xdp_prog)
+			     const struct bpf_prog *xdp_prog)
 {
 	void *orig_data, *orig_data_end, *hard_start;
 	struct netdev_rx_queue *rxqueue;
@@ -5033,7 +5033,7 @@ u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
 }
 
 static int
-netif_skb_check_for_xdp(struct sk_buff **pskb, struct bpf_prog *prog)
+netif_skb_check_for_xdp(struct sk_buff **pskb, const struct bpf_prog *prog)
 {
 	struct sk_buff *skb = *pskb;
 	int err, hroom, troom;
@@ -5057,7 +5057,7 @@ netif_skb_check_for_xdp(struct sk_buff **pskb, struct bpf_prog *prog)
 
 static u32 netif_receive_generic_xdp(struct sk_buff **pskb,
 				     struct xdp_buff *xdp,
-				     struct bpf_prog *xdp_prog)
+				     const struct bpf_prog *xdp_prog)
 {
 	struct sk_buff *skb = *pskb;
 	u32 mac_len, act = XDP_DROP;
@@ -5110,7 +5110,7 @@ static u32 netif_receive_generic_xdp(struct sk_buff **pskb,
  * and DDOS attacks will be more effective. In-driver-XDP use dedicated TX
  * queues, so they do not have this starvation issue.
  */
-void generic_xdp_tx(struct sk_buff *skb, struct bpf_prog *xdp_prog)
+void generic_xdp_tx(struct sk_buff *skb, const struct bpf_prog *xdp_prog)
 {
 	struct net_device *dev = skb->dev;
 	struct netdev_queue *txq;
@@ -5135,7 +5135,7 @@ void generic_xdp_tx(struct sk_buff *skb, struct bpf_prog *xdp_prog)
 
 static DEFINE_STATIC_KEY_FALSE(generic_xdp_needed_key);
 
-int do_xdp_generic(struct bpf_prog *xdp_prog, struct sk_buff **pskb)
+int do_xdp_generic(const struct bpf_prog *xdp_prog, struct sk_buff **pskb)
 {
 	struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
 
diff --git a/net/core/filter.c b/net/core/filter.c
index 82f92ed0dc72..b40e091a764d 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -4334,9 +4334,9 @@ u32 xdp_master_redirect(struct xdp_buff *xdp)
 EXPORT_SYMBOL_GPL(xdp_master_redirect);
 
 static inline int __xdp_do_redirect_xsk(struct bpf_redirect_info *ri,
-					struct net_device *dev,
+					const struct net_device *dev,
 					struct xdp_buff *xdp,
-					struct bpf_prog *xdp_prog)
+					const struct bpf_prog *xdp_prog)
 {
 	enum bpf_map_type map_type = ri->map_type;
 	void *fwd = ri->tgt_value;
@@ -4357,10 +4357,10 @@ static inline int __xdp_do_redirect_xsk(struct bpf_redirect_info *ri,
 	return err;
 }
 
-static __always_inline int __xdp_do_redirect_frame(struct bpf_redirect_info *ri,
-						   struct net_device *dev,
-						   struct xdp_frame *xdpf,
-						   struct bpf_prog *xdp_prog)
+static __always_inline int
+__xdp_do_redirect_frame(struct bpf_redirect_info *ri, struct net_device *dev,
+			struct xdp_frame *xdpf,
+			const struct bpf_prog *xdp_prog)
 {
 	enum bpf_map_type map_type = ri->map_type;
 	void *fwd = ri->tgt_value;
@@ -4429,7 +4429,7 @@ static __always_inline int __xdp_do_redirect_frame(struct bpf_redirect_info *ri,
 }
 
 int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
-		    struct bpf_prog *xdp_prog)
+		    const struct bpf_prog *xdp_prog)
 {
 	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 	enum bpf_map_type map_type = ri->map_type;
@@ -4443,7 +4443,8 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
 EXPORT_SYMBOL_GPL(xdp_do_redirect);
 
 int xdp_do_redirect_frame(struct net_device *dev, struct xdp_buff *xdp,
-			  struct xdp_frame *xdpf, struct bpf_prog *xdp_prog)
+			  struct xdp_frame *xdpf,
+			  const struct bpf_prog *xdp_prog)
 {
 	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 	enum bpf_map_type map_type = ri->map_type;
@@ -4458,9 +4459,9 @@ EXPORT_SYMBOL_GPL(xdp_do_redirect_frame);
 static int xdp_do_generic_redirect_map(struct net_device *dev,
 				       struct sk_buff *skb,
 				       struct xdp_buff *xdp,
-				       struct bpf_prog *xdp_prog, void *fwd,
-				       enum bpf_map_type map_type, u32 map_id,
-				       u32 flags)
+				       const struct bpf_prog *xdp_prog,
+				       void *fwd, enum bpf_map_type map_type,
+				       u32 map_id, u32 flags)
 {
 	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 	struct bpf_map *map;
@@ -4514,7 +4515,8 @@ static int xdp_do_generic_redirect_map(struct net_device *dev,
 }
 
 int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb,
-			    struct xdp_buff *xdp, struct bpf_prog *xdp_prog)
+			    struct xdp_buff *xdp,
+			    const struct bpf_prog *xdp_prog)
 {
 	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
 	enum bpf_map_type map_type = ri->map_type;
@@ -9061,7 +9063,8 @@ static bool xdp_is_valid_access(int off, int size,
 	return __is_valid_xdp_access(off, size);
 }
 
-void bpf_warn_invalid_xdp_action(struct net_device *dev, struct bpf_prog *prog, u32 act)
+void bpf_warn_invalid_xdp_action(const struct net_device *dev,
+				 const struct bpf_prog *prog, u32 act)
 {
 	const u32 act_max = XDP_REDIRECT;
 
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 00afeb90c23a..224cfe8b4368 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1009,7 +1009,7 @@ int skb_pp_cow_data(struct page_pool *pool, struct sk_buff **pskb,
 EXPORT_SYMBOL(skb_pp_cow_data);
 
 int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb,
-			 struct bpf_prog *prog)
+			 const struct bpf_prog *prog)
 {
 	if (!prog->aux->xdp_has_frags)
 		return -EINVAL;
-- 
2.47.0