[PATCH mlx5-next 8/8] {net/RDMA}/mlx5: Add LAG demux table API and vport demux rules

Tariq Toukan posted 8 patches 1 month ago
From: Shay Drory <shayd@nvidia.com>

Downstream patches will introduce SW-only LAG (e.g. shared_fdb without
HW LAG). In this mode the firmware cannot create the LAG demux table,
but vport demuxing is still required.

Move LAG demux flow-table ownership to the LAG layer and introduce APIs
to init/cleanup the demux table and add/delete per-vport rules. Adjust
the RDMA driver to use the new APIs.

In this mode, the LAG layer will create a flow group that matches vport
metadata. Vports that are not native to the LAG master eswitch add the
demux rule during IB representor load and remove it on unload.
The demux rule forwards traffic from these vports to their native eswitch
manager via a new destination type, MLX5_FLOW_DESTINATION_TYPE_VHCA_RX.

Signed-off-by: Shay Drory <shayd@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
 drivers/infiniband/hw/mlx5/ib_rep.c           |  20 ++-
 drivers/infiniband/hw/mlx5/main.c             |  21 +--
 drivers/infiniband/hw/mlx5/mlx5_ib.h          |   1 -
 .../net/ethernet/mellanox/mlx5/core/eswitch.h |  12 ++
 .../mellanox/mlx5/core/eswitch_offloads.c     |  83 +++++++++-
 .../net/ethernet/mellanox/mlx5/core/fs_core.c |  10 +-
 .../net/ethernet/mellanox/mlx5/core/lag/lag.c | 152 ++++++++++++++++++
 .../net/ethernet/mellanox/mlx5/core/lag/lag.h |  12 ++
 include/linux/mlx5/fs.h                       |   6 +-
 include/linux/mlx5/lag.h                      |  10 ++
 10 files changed, 300 insertions(+), 27 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/ib_rep.c b/drivers/infiniband/hw/mlx5/ib_rep.c
index df8f049c5806..abedc5e2f7b7 100644
--- a/drivers/infiniband/hw/mlx5/ib_rep.c
+++ b/drivers/infiniband/hw/mlx5/ib_rep.c
@@ -10,11 +10,13 @@
 
 static int
 mlx5_ib_set_vport_rep(struct mlx5_core_dev *dev,
+		      struct mlx5_core_dev *rep_dev,
 		      struct mlx5_eswitch_rep *rep,
 		      int vport_index)
 {
 	struct mlx5_ib_dev *ibdev;
 	struct net_device *ndev;
+	int ret;
 
 	ibdev = mlx5_eswitch_uplink_get_proto_dev(dev->priv.eswitch, REP_IB);
 	if (!ibdev)
@@ -24,7 +26,17 @@ mlx5_ib_set_vport_rep(struct mlx5_core_dev *dev,
 	rep->rep_data[REP_IB].priv = ibdev;
 	ndev = mlx5_ib_get_rep_netdev(rep->esw, rep->vport);
 
-	return ib_device_set_netdev(&ibdev->ib_dev, ndev, vport_index + 1);
+	ret = ib_device_set_netdev(&ibdev->ib_dev, ndev, vport_index + 1);
+	if (ret)
+		return ret;
+
+	/* Only Vports that are not native to the LAG master eswitch need to add
+	 * demux rule.
+	 */
+	if (mlx5_eswitch_get_total_vports(dev) >= vport_index)
+		return 0;
+
+	return mlx5_lag_demux_rule_add(rep_dev, rep->vport, vport_index);
 }
 
 static void mlx5_ib_register_peer_vport_reps(struct mlx5_core_dev *mdev);
@@ -131,7 +143,7 @@ mlx5_ib_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
 
 				if (mlx5_lag_is_master(peer_dev))
 					lag_master = peer_dev;
-				else if (!mlx5_lag_is_mpesw(dev))
+				else if (!mlx5_lag_is_mpesw(peer_dev))
 				/* Only 1 ib port is the representor for all uplinks */
 					peer_n_ports--;
 
@@ -145,7 +157,7 @@ mlx5_ib_vport_rep_load(struct mlx5_core_dev *dev, struct mlx5_eswitch_rep *rep)
 	if (rep->vport == MLX5_VPORT_UPLINK && !new_uplink)
 		profile = &raw_eth_profile;
 	else
-		return mlx5_ib_set_vport_rep(lag_master, rep, vport_index);
+		return mlx5_ib_set_vport_rep(lag_master, dev, rep, vport_index);
 
 	if (mlx5_lag_is_shared_fdb(dev)) {
 		ret = mlx5_ib_take_transport(lag_master);
@@ -233,6 +245,8 @@ mlx5_ib_vport_rep_unload(struct mlx5_eswitch_rep *rep)
 		vport_index = i;
 	}
 
+	mlx5_lag_demux_rule_del(mdev, vport_index);
+
 	port = &dev->port[vport_index];
 
 	ib_device_set_netdev(&dev->ib_dev, NULL, vport_index + 1);
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 635002e684a5..9fb0629978bd 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -26,6 +26,7 @@
 #include <linux/mlx5/fs.h>
 #include <linux/mlx5/eswitch.h>
 #include <linux/mlx5/driver.h>
+#include <linux/mlx5/lag.h>
 #include <linux/list.h>
 #include <rdma/ib_smi.h>
 #include <rdma/ib_umem_odp.h>
@@ -3678,12 +3679,12 @@ static void mlx5e_lag_event_unregister(struct mlx5_ib_dev *dev)
 
 static int mlx5_eth_lag_init(struct mlx5_ib_dev *dev)
 {
+	struct mlx5_flow_table_attr ft_attr = {};
 	struct mlx5_core_dev *mdev = dev->mdev;
-	struct mlx5_flow_namespace *ns = mlx5_get_flow_namespace(mdev,
-								 MLX5_FLOW_NAMESPACE_LAG);
-	struct mlx5_flow_table *ft;
+	struct mlx5_flow_namespace *ns;
 	int err;
 
+	ns = mlx5_get_flow_namespace(mdev, MLX5_FLOW_NAMESPACE_LAG);
 	if (!ns || !mlx5_lag_is_active(mdev))
 		return 0;
 
@@ -3691,14 +3692,15 @@ static int mlx5_eth_lag_init(struct mlx5_ib_dev *dev)
 	if (err)
 		return err;
 
-	ft = mlx5_create_lag_demux_flow_table(ns, 0, 0);
-	if (IS_ERR(ft)) {
-		err = PTR_ERR(ft);
+	ft_attr.level = 0;
+	ft_attr.prio = 0;
+	ft_attr.max_fte = dev->num_ports;
+
+	err = mlx5_lag_demux_init(mdev, &ft_attr);
+	if (err)
 		goto err_destroy_vport_lag;
-	}
 
 	mlx5e_lag_event_register(dev);
-	dev->flow_db->lag_demux_ft = ft;
 	dev->lag_ports = mlx5_lag_get_num_ports(mdev);
 	dev->lag_active = true;
 	return 0;
@@ -3716,8 +3718,7 @@ static void mlx5_eth_lag_cleanup(struct mlx5_ib_dev *dev)
 		dev->lag_active = false;
 
 		mlx5e_lag_event_unregister(dev);
-		mlx5_destroy_flow_table(dev->flow_db->lag_demux_ft);
-		dev->flow_db->lag_demux_ft = NULL;
+		mlx5_lag_demux_cleanup(mdev);
 
 		mlx5_cmd_destroy_vport_lag(mdev);
 	}
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 4f4114d95130..3fc31415e107 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -306,7 +306,6 @@ struct mlx5_ib_flow_db {
 	struct mlx5_ib_flow_prio	rdma_rx[MLX5_IB_NUM_FLOW_FT];
 	struct mlx5_ib_flow_prio	rdma_tx[MLX5_IB_NUM_FLOW_FT];
 	struct mlx5_ib_flow_prio	opfcs[MLX5_IB_OPCOUNTER_MAX];
-	struct mlx5_flow_table		*lag_demux_ft;
 	struct mlx5_ib_flow_prio        *rdma_transport_rx[MLX5_RDMA_TRANSPORT_BYPASS_PRIO];
 	struct mlx5_ib_flow_prio        *rdma_transport_tx[MLX5_RDMA_TRANSPORT_BYPASS_PRIO];
 	/* Protect flow steering bypass flow tables
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 96309a732d50..9b729789537f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -940,6 +940,12 @@ int mlx5_esw_ipsec_vf_packet_offload_supported(struct mlx5_core_dev *dev,
 					       u16 vport_num);
 bool mlx5_esw_host_functions_enabled(const struct mlx5_core_dev *dev);
 void mlx5_eswitch_safe_aux_devs_remove(struct mlx5_core_dev *dev);
+struct mlx5_flow_group *
+mlx5_esw_lag_demux_fg_create(struct mlx5_eswitch *esw,
+			     struct mlx5_flow_table *ft);
+struct mlx5_flow_handle *
+mlx5_esw_lag_demux_rule_create(struct mlx5_eswitch *esw, u16 vport_num,
+			       struct mlx5_flow_table *lag_ft);
 #else  /* CONFIG_MLX5_ESWITCH */
 /* eswitch API stubs */
 static inline int  mlx5_eswitch_init(struct mlx5_core_dev *dev) { return 0; }
@@ -1025,6 +1031,12 @@ mlx5_esw_vport_vhca_id(struct mlx5_eswitch *esw, u16 vportn, u16 *vhca_id)
 
 static inline void
 mlx5_eswitch_safe_aux_devs_remove(struct mlx5_core_dev *dev) {}
+static inline struct mlx5_flow_handle *
+mlx5_esw_lag_demux_rule_create(struct mlx5_eswitch *esw, u16 vport_num,
+			       struct mlx5_flow_table *lag_ft)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
 
 #endif /* CONFIG_MLX5_ESWITCH */
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 90e6f97bdf4a..0d907fb7f290 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -1459,6 +1459,83 @@ esw_add_restore_rule(struct mlx5_eswitch *esw, u32 tag)
 	return flow_rule;
 }
 
+struct mlx5_flow_group *
+mlx5_esw_lag_demux_fg_create(struct mlx5_eswitch *esw,
+			     struct mlx5_flow_table *ft)
+{
+	int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+	struct mlx5_flow_group *fg;
+	void *match_criteria;
+	void *flow_group_in;
+
+	if (!mlx5_eswitch_vport_match_metadata_enabled(esw))
+		return ERR_PTR(-EOPNOTSUPP);
+
+	if (IS_ERR(ft))
+		return ERR_CAST(ft);
+
+	flow_group_in = kvzalloc(inlen, GFP_KERNEL);
+	if (!flow_group_in)
+		return ERR_PTR(-ENOMEM);
+
+	match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in,
+				      match_criteria);
+	MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
+		 MLX5_MATCH_MISC_PARAMETERS_2);
+	MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
+	MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index,
+		 ft->max_fte - 1);
+
+	MLX5_SET(fte_match_param, match_criteria,
+		 misc_parameters_2.metadata_reg_c_0,
+		 mlx5_eswitch_get_vport_metadata_mask());
+
+	fg = mlx5_create_flow_group(ft, flow_group_in);
+	kvfree(flow_group_in);
+	if (IS_ERR(fg))
+		esw_warn(esw->dev, "Can't create LAG demux flow group\n");
+
+	return fg;
+}
+
+struct mlx5_flow_handle *
+mlx5_esw_lag_demux_rule_create(struct mlx5_eswitch *esw, u16 vport_num,
+			       struct mlx5_flow_table *lag_ft)
+{
+	struct mlx5_flow_spec *spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
+	struct mlx5_flow_destination dest = {};
+	struct mlx5_flow_act flow_act = {};
+	struct mlx5_flow_handle *ret;
+	void *misc;
+
+	if (!spec)
+		return ERR_PTR(-ENOMEM);
+
+	if (!mlx5_eswitch_vport_match_metadata_enabled(esw)) {
+		kfree(spec);
+		return ERR_PTR(-EOPNOTSUPP);
+	}
+
+	misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
+			    misc_parameters_2);
+	MLX5_SET(fte_match_set_misc2, misc, metadata_reg_c_0,
+		 mlx5_eswitch_get_vport_metadata_mask());
+	spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS_2;
+
+	misc = MLX5_ADDR_OF(fte_match_param, spec->match_value,
+			    misc_parameters_2);
+	MLX5_SET(fte_match_set_misc2, misc, metadata_reg_c_0,
+		 mlx5_eswitch_get_vport_metadata_for_match(esw, vport_num));
+
+	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+	dest.type = MLX5_FLOW_DESTINATION_TYPE_VHCA_RX;
+	dest.vhca.id = MLX5_CAP_GEN(esw->dev, vhca_id);
+
+	ret = mlx5_add_flow_rules(lag_ft, spec, &flow_act, &dest, 1);
+	kfree(spec);
+	return ret;
+}
+
 #define MAX_PF_SQ 256
 #define MAX_SQ_NVPORTS 32
 
@@ -2047,7 +2124,8 @@ static int esw_create_vport_rx_group(struct mlx5_eswitch *esw)
 
 	if (IS_ERR(g)) {
 		err = PTR_ERR(g);
-		mlx5_core_warn(esw->dev, "Failed to create vport rx group err %d\n", err);
+		esw_warn(esw->dev, "Failed to create vport rx group err %d\n",
+			 err);
 		goto out;
 	}
 
@@ -2092,7 +2170,8 @@ static int esw_create_vport_rx_drop_group(struct mlx5_eswitch *esw)
 
 	if (IS_ERR(g)) {
 		err = PTR_ERR(g);
-		mlx5_core_warn(esw->dev, "Failed to create vport rx drop group err %d\n", err);
+		esw_warn(esw->dev,
+			 "Failed to create vport rx drop group err %d\n", err);
 		goto out;
 	}
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
index 003d211306a7..61a6ba1e49dd 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -1438,15 +1438,9 @@ mlx5_create_vport_flow_table(struct mlx5_flow_namespace *ns,
 
 struct mlx5_flow_table*
 mlx5_create_lag_demux_flow_table(struct mlx5_flow_namespace *ns,
-				 int prio, u32 level)
+				 struct mlx5_flow_table_attr *ft_attr)
 {
-	struct mlx5_flow_table_attr ft_attr = {};
-
-	ft_attr.level = level;
-	ft_attr.prio  = prio;
-	ft_attr.max_fte = 1;
-
-	return __mlx5_create_flow_table(ns, &ft_attr, FS_FT_OP_MOD_LAG_DEMUX, 0);
+	return __mlx5_create_flow_table(ns, ft_attr, FS_FT_OP_MOD_LAG_DEMUX, 0);
 }
 EXPORT_SYMBOL(mlx5_create_lag_demux_flow_table);
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
index 51ec8f61ecbb..449e4bd86c06 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
@@ -1471,6 +1471,158 @@ struct mlx5_devcom_comp_dev *mlx5_lag_get_devcom_comp(struct mlx5_lag *ldev)
 	return devcom;
 }
 
+static int mlx5_lag_demux_ft_fg_init(struct mlx5_core_dev *dev,
+				     struct mlx5_flow_table_attr *ft_attr,
+				     struct mlx5_lag *ldev)
+{
+#ifdef CONFIG_MLX5_ESWITCH
+	struct mlx5_flow_namespace *ns;
+	struct mlx5_flow_group *fg;
+	int err;
+
+	ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_LAG);
+	if (!ns)
+		return 0;
+
+	ldev->lag_demux_ft = mlx5_create_flow_table(ns, ft_attr);
+	if (IS_ERR(ldev->lag_demux_ft))
+		return PTR_ERR(ldev->lag_demux_ft);
+
+	fg = mlx5_esw_lag_demux_fg_create(dev->priv.eswitch,
+					  ldev->lag_demux_ft);
+	if (IS_ERR(fg)) {
+		err = PTR_ERR(fg);
+		mlx5_destroy_flow_table(ldev->lag_demux_ft);
+		ldev->lag_demux_ft = NULL;
+		return err;
+	}
+
+	ldev->lag_demux_fg = fg;
+	return 0;
+#else
+	return -EOPNOTSUPP;
+#endif
+}
+
+static int mlx5_lag_demux_fw_init(struct mlx5_core_dev *dev,
+				  struct mlx5_flow_table_attr *ft_attr,
+				  struct mlx5_lag *ldev)
+{
+	struct mlx5_flow_namespace *ns;
+	int err;
+
+	ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_LAG);
+	if (!ns)
+		return 0;
+
+	ldev->lag_demux_fg = NULL;
+	ft_attr->max_fte = 1;
+	ldev->lag_demux_ft = mlx5_create_lag_demux_flow_table(ns, ft_attr);
+	if (IS_ERR(ldev->lag_demux_ft)) {
+		err = PTR_ERR(ldev->lag_demux_ft);
+		ldev->lag_demux_ft = NULL;
+		return err;
+	}
+
+	return 0;
+}
+
+int mlx5_lag_demux_init(struct mlx5_core_dev *dev,
+			struct mlx5_flow_table_attr *ft_attr)
+{
+	struct mlx5_lag *ldev;
+
+	if (!ft_attr)
+		return -EINVAL;
+
+	ldev = mlx5_lag_dev(dev);
+	if (!ldev)
+		return -ENODEV;
+
+	xa_init(&ldev->lag_demux_rules);
+
+	if (mlx5_get_sd(dev))
+		return mlx5_lag_demux_ft_fg_init(dev, ft_attr, ldev);
+
+	return mlx5_lag_demux_fw_init(dev, ft_attr, ldev);
+}
+EXPORT_SYMBOL(mlx5_lag_demux_init);
+
+void mlx5_lag_demux_cleanup(struct mlx5_core_dev *dev)
+{
+	struct mlx5_flow_handle *rule;
+	struct mlx5_lag *ldev;
+	unsigned long vport_num;
+
+	ldev = mlx5_lag_dev(dev);
+	if (!ldev)
+		return;
+
+	xa_for_each(&ldev->lag_demux_rules, vport_num, rule)
+		mlx5_del_flow_rules(rule);
+	xa_destroy(&ldev->lag_demux_rules);
+
+	if (ldev->lag_demux_fg)
+		mlx5_destroy_flow_group(ldev->lag_demux_fg);
+	if (ldev->lag_demux_ft)
+		mlx5_destroy_flow_table(ldev->lag_demux_ft);
+	ldev->lag_demux_fg = NULL;
+	ldev->lag_demux_ft = NULL;
+}
+EXPORT_SYMBOL(mlx5_lag_demux_cleanup);
+
+int mlx5_lag_demux_rule_add(struct mlx5_core_dev *vport_dev, u16 vport_num,
+			    int index)
+{
+	struct mlx5_flow_handle *rule;
+	struct mlx5_lag *ldev;
+	int err;
+
+	ldev = mlx5_lag_dev(vport_dev);
+	if (!ldev || !ldev->lag_demux_fg)
+		return 0;
+
+	if (xa_load(&ldev->lag_demux_rules, index))
+		return 0;
+
+	rule = mlx5_esw_lag_demux_rule_create(vport_dev->priv.eswitch,
+					      vport_num, ldev->lag_demux_ft);
+	if (IS_ERR(rule)) {
+		err = PTR_ERR(rule);
+		mlx5_core_warn(vport_dev,
+			       "Failed to create LAG demux rule for vport %u, err %d\n",
+			       vport_num, err);
+		return err;
+	}
+
+	err = xa_err(xa_store(&ldev->lag_demux_rules, index, rule,
+			      GFP_KERNEL));
+	if (err) {
+		mlx5_del_flow_rules(rule);
+		mlx5_core_warn(vport_dev,
+			       "Failed to store LAG demux rule for vport %u, err %d\n",
+			       vport_num, err);
+	}
+
+	return err;
+}
+EXPORT_SYMBOL(mlx5_lag_demux_rule_add);
+
+void mlx5_lag_demux_rule_del(struct mlx5_core_dev *dev, int index)
+{
+	struct mlx5_flow_handle *rule;
+	struct mlx5_lag *ldev;
+
+	ldev = mlx5_lag_dev(dev);
+	if (!ldev || !ldev->lag_demux_fg)
+		return;
+
+	rule = xa_erase(&ldev->lag_demux_rules, index);
+	if (rule)
+		mlx5_del_flow_rules(rule);
+}
+EXPORT_SYMBOL(mlx5_lag_demux_rule_del);
+
 static void mlx5_queue_bond_work(struct mlx5_lag *ldev, unsigned long delay)
 {
 	queue_delayed_work(ldev->wq, &ldev->bond_work, delay);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
index 30cbd61768f8..6c911374f409 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
@@ -5,6 +5,9 @@
 #define __MLX5_LAG_H__
 
 #include <linux/debugfs.h>
+#include <linux/errno.h>
+#include <linux/xarray.h>
+#include <linux/mlx5/fs.h>
 
 #define MLX5_LAG_MAX_HASH_BUCKETS 16
 /* XArray mark for the LAG master device
@@ -83,6 +86,9 @@ struct mlx5_lag {
 	/* Protect lag fields/state changes */
 	struct mutex		  lock;
 	struct lag_mpesw	  lag_mpesw;
+	struct mlx5_flow_table   *lag_demux_ft;
+	struct mlx5_flow_group   *lag_demux_fg;
+	struct xarray		  lag_demux_rules;
 };
 
 static inline struct mlx5_lag *
@@ -133,6 +139,12 @@ mlx5_lag_is_ready(struct mlx5_lag *ldev)
 
 bool mlx5_lag_shared_fdb_supported(struct mlx5_lag *ldev);
 bool mlx5_lag_check_prereq(struct mlx5_lag *ldev);
+int mlx5_lag_demux_init(struct mlx5_core_dev *dev,
+			struct mlx5_flow_table_attr *ft_attr);
+void mlx5_lag_demux_cleanup(struct mlx5_core_dev *dev);
+int mlx5_lag_demux_rule_add(struct mlx5_core_dev *dev, u16 vport_num,
+			    int vport_index);
+void mlx5_lag_demux_rule_del(struct mlx5_core_dev *dev, int vport_index);
 void mlx5_modify_lag(struct mlx5_lag *ldev,
 		     struct lag_tracker *tracker);
 int mlx5_activate_lag(struct mlx5_lag *ldev,
diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h
index 02064424e868..d8f3b7ef319e 100644
--- a/include/linux/mlx5/fs.h
+++ b/include/linux/mlx5/fs.h
@@ -252,9 +252,9 @@ mlx5_create_auto_grouped_flow_table(struct mlx5_flow_namespace *ns,
 struct mlx5_flow_table *
 mlx5_create_vport_flow_table(struct mlx5_flow_namespace *ns,
 			     struct mlx5_flow_table_attr *ft_attr, u16 vport);
-struct mlx5_flow_table *mlx5_create_lag_demux_flow_table(
-					       struct mlx5_flow_namespace *ns,
-					       int prio, u32 level);
+struct mlx5_flow_table *
+mlx5_create_lag_demux_flow_table(struct mlx5_flow_namespace *ns,
+				 struct mlx5_flow_table_attr *ft_attr);
 int mlx5_destroy_flow_table(struct mlx5_flow_table *ft);
 
 /* inbox should be set with the following values:
diff --git a/include/linux/mlx5/lag.h b/include/linux/mlx5/lag.h
index d370dfd19055..ab9f754664e5 100644
--- a/include/linux/mlx5/lag.h
+++ b/include/linux/mlx5/lag.h
@@ -4,8 +4,18 @@
 #ifndef __MLX5_LAG_API_H__
 #define __MLX5_LAG_API_H__
 
+#include <linux/types.h>
+
 struct mlx5_core_dev;
+struct mlx5_flow_table;
+struct mlx5_flow_table_attr;
 
+int mlx5_lag_demux_init(struct mlx5_core_dev *dev,
+			struct mlx5_flow_table_attr *ft_attr);
+void mlx5_lag_demux_cleanup(struct mlx5_core_dev *dev);
+int mlx5_lag_demux_rule_add(struct mlx5_core_dev *dev, u16 vport_num,
+			    int vport_index);
+void mlx5_lag_demux_rule_del(struct mlx5_core_dev *dev, int vport_index);
 int mlx5_lag_get_dev_seq(struct mlx5_core_dev *dev);
 
 #endif /* __MLX5_LAG_API_H__ */
-- 
2.44.0
Re: [PATCH mlx5-next 8/8] {net/RDMA}/mlx5: Add LAG demux table API and vport demux rules
Posted by Jakub Kicinski 1 month ago
On Sun, 8 Mar 2026 08:55:59 +0200 Tariq Toukan wrote:
> +struct mlx5_flow_handle *
> +mlx5_esw_lag_demux_rule_create(struct mlx5_eswitch *esw, u16 vport_num,
> +			       struct mlx5_flow_table *lag_ft)
> +{
> +	struct mlx5_flow_spec *spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
> +	struct mlx5_flow_destination dest = {};
> +	struct mlx5_flow_act flow_act = {};
> +	struct mlx5_flow_handle *ret;
> +	void *misc;
> +
> +	if (!spec)
> +		return ERR_PTR(-ENOMEM);
> +
> +	if (!mlx5_eswitch_vport_match_metadata_enabled(esw)) {
> +		kfree(spec);
> +		return ERR_PTR(-EOPNOTSUPP);
> +	}
> +
> +	misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
> +			    misc_parameters_2);
> +	MLX5_SET(fte_match_set_misc2, misc, metadata_reg_c_0,
> +		 mlx5_eswitch_get_vport_metadata_mask());
> +	spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS_2;
> +
> +	misc = MLX5_ADDR_OF(fte_match_param, spec->match_value,
> +			    misc_parameters_2);
> +	MLX5_SET(fte_match_set_misc2, misc, metadata_reg_c_0,
> +		 mlx5_eswitch_get_vport_metadata_for_match(esw, vport_num));
> +
> +	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
> +	dest.type = MLX5_FLOW_DESTINATION_TYPE_VHCA_RX;
> +	dest.vhca.id = MLX5_CAP_GEN(esw->dev, vhca_id);
> +
> +	ret = mlx5_add_flow_rules(lag_ft, spec, &flow_act, &dest, 1);
> +	kfree(spec);
> +	return ret;
> +}

drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c:1512:12-13: WARNING kvmalloc is used to allocate this memory at line 1502
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c:1532:11-12: WARNING kvmalloc is used to allocate this memory at line 1502
-- 
pw-bot: cr
Re: [PATCH mlx5-next 8/8] {net/RDMA}/mlx5: Add LAG demux table API and vport demux rules
Posted by Mark Bloch 1 month ago

On 08/03/2026 17:52, Jakub Kicinski wrote:
> On Sun, 8 Mar 2026 08:55:59 +0200 Tariq Toukan wrote:
>> +struct mlx5_flow_handle *
>> +mlx5_esw_lag_demux_rule_create(struct mlx5_eswitch *esw, u16 vport_num,
>> +			       struct mlx5_flow_table *lag_ft)
>> +{
>> +	struct mlx5_flow_spec *spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
>> +	struct mlx5_flow_destination dest = {};
>> +	struct mlx5_flow_act flow_act = {};
>> +	struct mlx5_flow_handle *ret;
>> +	void *misc;
>> +
>> +	if (!spec)
>> +		return ERR_PTR(-ENOMEM);
>> +
>> +	if (!mlx5_eswitch_vport_match_metadata_enabled(esw)) {
>> +		kfree(spec);
>> +		return ERR_PTR(-EOPNOTSUPP);
>> +	}
>> +
>> +	misc = MLX5_ADDR_OF(fte_match_param, spec->match_criteria,
>> +			    misc_parameters_2);
>> +	MLX5_SET(fte_match_set_misc2, misc, metadata_reg_c_0,
>> +		 mlx5_eswitch_get_vport_metadata_mask());
>> +	spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS_2;
>> +
>> +	misc = MLX5_ADDR_OF(fte_match_param, spec->match_value,
>> +			    misc_parameters_2);
>> +	MLX5_SET(fte_match_set_misc2, misc, metadata_reg_c_0,
>> +		 mlx5_eswitch_get_vport_metadata_for_match(esw, vport_num));
>> +
>> +	flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
>> +	dest.type = MLX5_FLOW_DESTINATION_TYPE_VHCA_RX;
>> +	dest.vhca.id = MLX5_CAP_GEN(esw->dev, vhca_id);
>> +
>> +	ret = mlx5_add_flow_rules(lag_ft, spec, &flow_act, &dest, 1);
>> +	kfree(spec);
>> +	return ret;
>> +}
> 
> drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c:1512:12-13: WARNING kvmalloc is used to allocate this memory at line 1502
> drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c:1532:11-12: WARNING kvmalloc is used to allocate this memory at line 1502
Hi Jakub,

Thanks for catching this. We’ll address it.

Also, I saw AI-flagged issues on
“net/mlx5: LAG, replace pf array with xarray”.
Just for context, lag_lock is already a known problematic
area for us, and we do have plans to remove it. I ran the
review prompts locally in ORC mode, so I assume I saw the
same comments as NIPA.

So the issue raised there is not really a new one. lag_lock
already has some known issues today, but we do not expect to
hit this particular case in practice, since by the time
execution reaches mdev removal, the LAG should already have
been destroyed and the netdevs already removed for the driver
internal structures.

Mark
Re: [PATCH mlx5-next 8/8] {net/RDMA}/mlx5: Add LAG demux table API and vport demux rules
Posted by Jakub Kicinski 1 month ago
On Sun, 8 Mar 2026 20:34:26 +0200 Mark Bloch wrote:
> Thanks for catching this. We’ll address it.
> 
> Also, I saw AI-flagged issues on
> “net/mlx5: LAG, replace pf array with xarray”.
> Just for context, lag_lock is already a known problematic
> area for us, and we do have plans to remove it. I ran the
> review prompts locally in ORC mode, so I assume I saw the
> same comments as NIPA.
> 
> So the issue raised there is not really a new one. lag_lock
> already has some known issues today, but we do not expect to
> hit this particular case in practice, since by the time
> execution reaches mdev removal, the LAG should already have
> been destroyed and the netdevs already removed for the driver
> internal structures.

Ack, I haven't looked at the AI review TBH.
As usual with known AI flags - should the explanation be part 
of the commit message?
Re: [PATCH mlx5-next 8/8] {net/RDMA}/mlx5: Add LAG demux table API and vport demux rules
Posted by Mark Bloch 1 month ago

On 09/03/2026 23:33, Jakub Kicinski wrote:
> On Sun, 8 Mar 2026 20:34:26 +0200 Mark Bloch wrote:
>> Thanks for catching this. We’ll address it.
>>
>> Also, I saw AI-flagged issues on
>> “net/mlx5: LAG, replace pf array with xarray”.
>> Just for context, lag_lock is already a known problematic
>> area for us, and we do have plans to remove it. I ran the
>> review prompts locally in ORC mode, so I assume I saw the
>> same comments as NIPA.
>>
>> So the issue raised there is not really a new one. lag_lock
>> already has some known issues today, but we do not expect to
>> hit this particular case in practice, since by the time
>> execution reaches mdev removal, the LAG should already have
>> been destroyed and the netdevs already removed for the driver
>> internal structures.
> 
> Ack, I haven't looked at the AI review TBH.
> As usual with known AI flags - should the explanation be part 
> of the commit message?

That's an interesting question.
I'll try to give my $0.02 on the general case.
Out of curiosity I ran one of our upcoming internal series
through both Mason's prompts with Claude and our internal
AI review tool.

Mason's + Claude reported 3 false positives.

Our internal AI tool also reported 3 false positives (interestingly,
they were different issues) and 1 real issue, which I already knew
about since the author hasn't fixed it yet.

So in theory we could add a note like “AI tools may flag issues
X/Y/Z but those are not valid here”, but in practice it really
depends on which tool is used and how it's configured.

At the moment it seems that netdev/NIPA is using Mason's prompts
with Claude, so if anything that would probably be the default
reference.

There is also a larger question: running NIPA before submission
is not currently required. Are there any plans to make it part
of the submission expectations, rather than just encouraged?

Mark

Re: [PATCH mlx5-next 8/8] {net/RDMA}/mlx5: Add LAG demux table API and vport demux rules
Posted by Jakub Kicinski 4 weeks, 1 day ago
On Tue, 10 Mar 2026 08:05:39 +0200 Mark Bloch wrote:
> On 09/03/2026 23:33, Jakub Kicinski wrote:
> > On Sun, 8 Mar 2026 20:34:26 +0200 Mark Bloch wrote:  
> >> Thanks for catching this. We’ll address it.
> >>
> >> Also, I saw AI-flagged issues on
> >> “net/mlx5: LAG, replace pf array with xarray”.
> >> Just for context, lag_lock is already a known problematic
> >> area for us, and we do have plans to remove it. I ran the
> >> review prompts locally in ORC mode, so I assume I saw the
> >> same comments as NIPA.
> >>
> >> So the issue raised there is not really a new one. lag_lock
> >> already has some known issues today, but we do not expect to
> >> hit this particular case in practice, since by the time
> >> execution reaches mdev removal, the LAG should already have
> >> been destroyed and the netdevs already removed for the driver
> >> internal structures.  
> > 
> > Ack, I haven't looked at the AI review TBH.
> > As usual with known AI flags - should the explanation be part 
> > of the commit message?  
> 
> That's an interesting question.
> I'll try to give my $0.02 on the general case.
> Out of curiosity I ran one of our upcoming internal series
> through both Mason's prompts with Claude and our internal
> AI review tool.
> 
> Mason's + Claude reported 3 false positives.
> 
> Our internal AI tool also reported 3 false positives (interestingly,
> they were different issues) and 1 real issue, which I already knew
> about since the author hasn't fixed it yet.
> 
> So in theory we could add a note like “AI tools may flag issues
> X/Y/Z but those are not valid here”, but in practice it really
> depends on which tool is used and how it's configured.
> 
> At the moment it seems that netdev/NIPA is using Mason's prompts
> with Claude, so if anything that would probably be the default
> reference.
> 
> There is also a larger question: running NIPA before submission
> is not currently required. Are there any plans to make it part
> of the submission expectations, rather than just encouraged?

No, no, the process angle is not how I look at this.
We should only add comments to the commit message or code if there's
genuine ambiguity. Basically, if someone reading the code might also get
confused, there should be an explanation somewhere. We should not be
adding any code or explanations to make tools happy.