From: Alexander Lobakin
To: "David S.
Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Toke Høiland-Jørgensen, Alexei Starovoitov, Daniel Borkmann, John Fastabend, Andrii Nakryiko, Maciej Fijalkowski, Stanislav Fomichev, Magnus Karlsson, nex.sw.ncis.osdt.itp.upstreaming@intel.com, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v3 04/18] bpf, xdp: constify some bpf_prog * function arguments
Date: Wed, 30 Oct 2024 17:51:47 +0100
Message-ID: <20241030165201.442301-5-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.47.0
In-Reply-To: <20241030165201.442301-1-aleksander.lobakin@intel.com>
References: <20241030165201.442301-1-aleksander.lobakin@intel.com>

In many places, the bpf_prog pointer is used only for tracing or other
operations that do not modify the structure itself; the same is true for
net_device. Address at least some of them and add `const` qualifiers
there. The object code didn't change, but constifying may prevent
unwanted data modifications and also allows more helpers to take const
arguments.
Signed-off-by: Alexander Lobakin
Reviewed-by: Toke Høiland-Jørgensen
---
 include/linux/bpf.h       | 12 ++++++------
 include/linux/filter.h    |  9 +++++----
 include/linux/netdevice.h |  6 +++---
 include/linux/skbuff.h    |  2 +-
 kernel/bpf/devmap.c       |  8 ++++----
 net/core/dev.c            | 10 +++++-----
 net/core/filter.c         | 29 ++++++++++++++++-------------
 net/core/skbuff.c         |  2 +-
 8 files changed, 41 insertions(+), 37 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 19d8ca8ac960..263515478984 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2534,10 +2534,10 @@ int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_frame *xdpf,
 int dev_map_enqueue_multi(struct xdp_frame *xdpf, struct net_device *dev_rx,
			  struct bpf_map *map, bool exclude_ingress);
 int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
-			     struct bpf_prog *xdp_prog);
+			     const struct bpf_prog *xdp_prog);
 int dev_map_redirect_multi(struct net_device *dev, struct sk_buff *skb,
-			   struct bpf_prog *xdp_prog, struct bpf_map *map,
-			   bool exclude_ingress);
+			   const struct bpf_prog *xdp_prog,
+			   struct bpf_map *map, bool exclude_ingress);
 
 void __cpu_map_flush(struct list_head *flush_list);
 int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_frame *xdpf,
@@ -2801,15 +2801,15 @@ struct sk_buff;
 
 static inline int dev_map_generic_redirect(struct bpf_dtab_netdev *dst,
					   struct sk_buff *skb,
-					   struct bpf_prog *xdp_prog)
+					   const struct bpf_prog *xdp_prog)
 {
	return 0;
 }
 
 static inline int dev_map_redirect_multi(struct net_device *dev,
					 struct sk_buff *skb,
-					 struct bpf_prog *xdp_prog, struct bpf_map *map,
-					 bool exclude_ingress)
+					 const struct bpf_prog *xdp_prog,
+					 struct bpf_map *map, bool exclude_ingress)
 {
	return 0;
 }
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 7d7578a8eac1..ee067ab13272 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -1178,17 +1178,18 @@ static inline int xdp_ok_fwd_dev(const struct net_device *fwd,
  * This does not appear to be a real limitation for existing software.
  */
 int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb,
-			    struct xdp_buff *xdp, struct bpf_prog *prog);
+			    struct xdp_buff *xdp, const struct bpf_prog *prog);
 int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
-		    struct bpf_prog *prog);
+		    const struct bpf_prog *prog);
 int xdp_do_redirect_frame(struct net_device *dev, struct xdp_buff *xdp,
			  struct xdp_frame *xdpf,
-			  struct bpf_prog *prog);
+			  const struct bpf_prog *prog);
 void xdp_do_flush(void);
 
-void bpf_warn_invalid_xdp_action(struct net_device *dev, struct bpf_prog *prog, u32 act);
+void bpf_warn_invalid_xdp_action(const struct net_device *dev,
+				 const struct bpf_prog *prog, u32 act);
 
 #ifdef CONFIG_INET
 struct sock *bpf_run_sk_reuseport(struct sock_reuseport *reuse, struct sock *sk,
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 3c552b648b27..201f0c0ec62e 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3941,9 +3941,9 @@ static inline void dev_consume_skb_any(struct sk_buff *skb)
 }
 
 u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
-			     struct bpf_prog *xdp_prog);
-void generic_xdp_tx(struct sk_buff *skb, struct bpf_prog *xdp_prog);
-int do_xdp_generic(struct bpf_prog *xdp_prog, struct sk_buff **pskb);
+			     const struct bpf_prog *xdp_prog);
+void generic_xdp_tx(struct sk_buff *skb, const struct bpf_prog *xdp_prog);
+int do_xdp_generic(const struct bpf_prog *xdp_prog, struct sk_buff **pskb);
 int netif_rx(struct sk_buff *skb);
 int __netif_rx(struct sk_buff *skb);
 
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index f187a2415fb8..c867df5b1051 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3595,7 +3595,7 @@ static inline netmem_ref skb_frag_netmem(const skb_frag_t *frag)
 int skb_pp_cow_data(struct page_pool *pool, struct sk_buff **pskb,
		    unsigned int headroom);
 int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb,
-			 struct bpf_prog *prog);
+			 const struct bpf_prog *prog);
 
 /**
  * skb_frag_address - gets the address of the data contained in a paged fragment
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 7878be18e9d2..effde52bc857 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -678,7 +678,7 @@ int dev_map_enqueue_multi(struct xdp_frame *xdpf, struct net_device *dev_rx,
 }
 
 int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
-			     struct bpf_prog *xdp_prog)
+			     const struct bpf_prog *xdp_prog)
 {
	int err;
 
@@ -701,7 +701,7 @@ int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb,
 
 static int dev_map_redirect_clone(struct bpf_dtab_netdev *dst,
				  struct sk_buff *skb,
-				  struct bpf_prog *xdp_prog)
+				  const struct bpf_prog *xdp_prog)
 {
	struct sk_buff *nskb;
	int err;
@@ -720,8 +720,8 @@ static int dev_map_redirect_clone(struct bpf_dtab_netdev *dst,
 }
 
 int dev_map_redirect_multi(struct net_device *dev, struct sk_buff *skb,
-			   struct bpf_prog *xdp_prog, struct bpf_map *map,
-			   bool exclude_ingress)
+			   const struct bpf_prog *xdp_prog,
+			   struct bpf_map *map, bool exclude_ingress)
 {
	struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
	struct bpf_dtab_netdev *dst, *last_dst = NULL;
diff --git a/net/core/dev.c b/net/core/dev.c
index c682173a7642..b857abb5c0e9 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4927,7 +4927,7 @@ static struct netdev_rx_queue *netif_get_rxqueue(struct sk_buff *skb)
 }
 
 u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
-			     struct bpf_prog *xdp_prog)
+			     const struct bpf_prog *xdp_prog)
 {
	void *orig_data, *orig_data_end, *hard_start;
	struct netdev_rx_queue *rxqueue;
@@ -5029,7 +5029,7 @@ u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
 }
 
 static int
-netif_skb_check_for_xdp(struct sk_buff **pskb, struct bpf_prog *prog)
+netif_skb_check_for_xdp(struct sk_buff **pskb, const struct bpf_prog *prog)
 {
	struct sk_buff *skb = *pskb;
	int err, hroom, troom;
@@ -5053,7 +5053,7 @@ netif_skb_check_for_xdp(struct sk_buff **pskb, struct bpf_prog *prog)
 
 static u32 netif_receive_generic_xdp(struct sk_buff **pskb,
				     struct xdp_buff *xdp,
-				     struct bpf_prog *xdp_prog)
+				     const struct bpf_prog *xdp_prog)
 {
	struct sk_buff *skb = *pskb;
	u32 mac_len, act = XDP_DROP;
@@ -5106,7 +5106,7 @@ static u32 netif_receive_generic_xdp(struct sk_buff **pskb,
  * and DDOS attacks will be more effective. In-driver-XDP use dedicated TX
  * queues, so they do not have this starvation issue.
  */
-void generic_xdp_tx(struct sk_buff *skb, struct bpf_prog *xdp_prog)
+void generic_xdp_tx(struct sk_buff *skb, const struct bpf_prog *xdp_prog)
 {
	struct net_device *dev = skb->dev;
	struct netdev_queue *txq;
@@ -5131,7 +5131,7 @@ void generic_xdp_tx(struct sk_buff *skb, struct bpf_prog *xdp_prog)
 
 static DEFINE_STATIC_KEY_FALSE(generic_xdp_needed_key);
 
-int do_xdp_generic(struct bpf_prog *xdp_prog, struct sk_buff **pskb)
+int do_xdp_generic(const struct bpf_prog *xdp_prog, struct sk_buff **pskb)
 {
	struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
 
diff --git a/net/core/filter.c b/net/core/filter.c
index 58761263176c..7faee6c8f7d9 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -4351,9 +4351,9 @@ u32 xdp_master_redirect(struct xdp_buff *xdp)
 EXPORT_SYMBOL_GPL(xdp_master_redirect);
 
 static inline int __xdp_do_redirect_xsk(struct bpf_redirect_info *ri,
-					struct net_device *dev,
+					const struct net_device *dev,
					struct xdp_buff *xdp,
-					struct bpf_prog *xdp_prog)
+					const struct bpf_prog *xdp_prog)
 {
	enum bpf_map_type map_type = ri->map_type;
	void *fwd = ri->tgt_value;
@@ -4374,10 +4374,10 @@ static inline int __xdp_do_redirect_xsk(struct bpf_redirect_info *ri,
	return err;
 }
 
-static __always_inline int __xdp_do_redirect_frame(struct bpf_redirect_info *ri,
-						   struct net_device *dev,
-						   struct xdp_frame *xdpf,
-						   struct bpf_prog *xdp_prog)
+static __always_inline int
+__xdp_do_redirect_frame(struct bpf_redirect_info *ri, struct net_device *dev,
+			struct xdp_frame *xdpf,
+			const struct bpf_prog *xdp_prog)
 {
	enum bpf_map_type map_type = ri->map_type;
	void *fwd = ri->tgt_value;
@@ -4446,7 +4446,7 @@ static __always_inline int __xdp_do_redirect_frame(struct bpf_redirect_info *ri,
 }
 
 int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
-		    struct bpf_prog *xdp_prog)
+		    const struct bpf_prog *xdp_prog)
 {
	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
	enum bpf_map_type map_type = ri->map_type;
@@ -4460,7 +4460,8 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp,
 EXPORT_SYMBOL_GPL(xdp_do_redirect);
 
 int xdp_do_redirect_frame(struct net_device *dev, struct xdp_buff *xdp,
-			  struct xdp_frame *xdpf, struct bpf_prog *xdp_prog)
+			  struct xdp_frame *xdpf,
+			  const struct bpf_prog *xdp_prog)
 {
	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
	enum bpf_map_type map_type = ri->map_type;
@@ -4475,9 +4476,9 @@ EXPORT_SYMBOL_GPL(xdp_do_redirect_frame);
 static int xdp_do_generic_redirect_map(struct net_device *dev,
				       struct sk_buff *skb,
				       struct xdp_buff *xdp,
-				       struct bpf_prog *xdp_prog, void *fwd,
-				       enum bpf_map_type map_type, u32 map_id,
-				       u32 flags)
+				       const struct bpf_prog *xdp_prog,
+				       void *fwd, enum bpf_map_type map_type,
+				       u32 map_id, u32 flags)
 {
	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
	struct bpf_map *map;
@@ -4531,7 +4532,8 @@ static int xdp_do_generic_redirect_map(struct net_device *dev,
 }
 
 int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb,
-			    struct xdp_buff *xdp, struct bpf_prog *xdp_prog)
+			    struct xdp_buff *xdp,
+			    const struct bpf_prog *xdp_prog)
 {
	struct bpf_redirect_info *ri = bpf_net_ctx_get_ri();
	enum bpf_map_type map_type = ri->map_type;
@@ -9090,7 +9092,8 @@ static bool xdp_is_valid_access(int off, int size,
	return __is_valid_xdp_access(off, size);
 }
 
-void bpf_warn_invalid_xdp_action(struct net_device *dev, struct bpf_prog *prog, u32 act)
+void bpf_warn_invalid_xdp_action(const struct net_device *dev,
+				 const struct bpf_prog *prog, u32 act)
 {
	const u32 act_max = XDP_REDIRECT;
 
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 00afeb90c23a..224cfe8b4368 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1009,7 +1009,7 @@ int skb_pp_cow_data(struct page_pool *pool, struct sk_buff **pskb,
 EXPORT_SYMBOL(skb_pp_cow_data);
 
 int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb,
-			 struct bpf_prog *prog)
+			 const struct bpf_prog *prog)
 {
	if (!prog->aux->xdp_has_frags)
		return -EINVAL;
-- 
2.47.0