[PATCH net-next V2 2/3] net/mlx5e: Add device PCIe congestion ethtool stats

Posted by Tariq Toukan 2 months, 4 weeks ago
From: Dragos Tatulea <dtatulea@nvidia.com>

Implement the PCIe Congestion Event notifier which triggers a work item
to query the PCIe Congestion Event object. The result of the congestion
state is reflected in the new ethtool stats:

* pci_bw_inbound_high: the device has crossed the high threshold for
  inbound PCIe traffic.
* pci_bw_inbound_low: the device has crossed the low threshold for
  inbound PCIe traffic.
* pci_bw_outbound_high: the device has crossed the high threshold for
  outbound PCIe traffic.
* pci_bw_outbound_low: the device has crossed the low threshold for
  outbound PCIe traffic.

The high and low thresholds are currently configured at 90% and 75% of
the PCIe bandwidth. They act as hysteresis thresholds: crossing the
high threshold bumps the corresponding "high" counter, and dropping
back below the low threshold bumps the "low" counter, which makes it
possible to tell whether the PCI bus on the device side is in a
congested state.

If the high counter equals the low counter + 1, the device is in a
congested state. If the two counters are equal, it is not.

The counters are also documented.

A follow-up patch will make the thresholds configurable.
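
For illustration, a minimal userspace sketch (hypothetical, not part of
this patch) of how the hysteresis counters read from `ethtool -S` map
to a congestion state:

	#include <stdbool.h>
	#include <stdint.h>

	/* A "high" crossing enters the congested state and a "low"
	 * crossing leaves it, so the high counter is either equal to the
	 * low counter (not congested) or ahead by one (congested).
	 */
	static bool pcie_congested(uint64_t high_events, uint64_t low_events)
	{
		return high_events > low_events;
	}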

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
 .../ethernet/mellanox/mlx5/counters.rst       |  32 ++++
 .../mellanox/mlx5/core/en/pcie_cong_event.c   | 175 ++++++++++++++++++
 .../ethernet/mellanox/mlx5/core/en_stats.c    |   1 +
 .../ethernet/mellanox/mlx5/core/en_stats.h    |   1 +
 drivers/net/ethernet/mellanox/mlx5/core/eq.c  |   4 +
 5 files changed, 213 insertions(+)

diff --git a/Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst b/Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst
index 43d72c8b713b..754c81436408 100644
--- a/Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst
+++ b/Documentation/networking/device_drivers/ethernet/mellanox/mlx5/counters.rst
@@ -1341,3 +1341,35 @@ Device Counters
      - The number of times the device owned queue had not enough buffers
        allocated.
      - Error
+
+   * - `pci_bw_inbound_high`
+     - The number of times the device crossed the high inbound PCIe bandwidth
+       threshold. To be compared to pci_bw_inbound_low to check if the device
+       is in a congested state.
+       If pci_bw_inbound_high == pci_bw_inbound_low then the device is not congested.
+       If pci_bw_inbound_high > pci_bw_inbound_low then the device is congested.
+     - Informative
+
+   * - `pci_bw_inbound_low`
+     - The number of times the device crossed the low inbound PCIe bandwidth
+       threshold. To be compared to pci_bw_inbound_high to check if the device
+       is in a congested state.
+       If pci_bw_inbound_high == pci_bw_inbound_low then the device is not congested.
+       If pci_bw_inbound_high > pci_bw_inbound_low then the device is congested.
+     - Informative
+
+   * - `pci_bw_outbound_high`
+     - The number of times the device crossed the high outbound PCIe bandwidth
+       threshold. To be compared to pci_bw_outbound_low to check if the device
+       is in a congested state.
+       If pci_bw_outbound_high == pci_bw_outbound_low then the device is not congested.
+       If pci_bw_outbound_high > pci_bw_outbound_low then the device is congested.
+     - Informative
+
+   * - `pci_bw_outbound_low`
+     - The number of times the device crossed the low outbound PCIe bandwidth
+       threshold. To be compared to pci_bw_outbound_high to check if the device
+       is in a congested state.
+       If pci_bw_outbound_high == pci_bw_outbound_low then the device is not congested.
+       If pci_bw_outbound_high > pci_bw_outbound_low then the device is congested.
+     - Informative
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/pcie_cong_event.c b/drivers/net/ethernet/mellanox/mlx5/core/en/pcie_cong_event.c
index 95a6db9d30b3..a24e5465ceeb 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/pcie_cong_event.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/pcie_cong_event.c
@@ -4,6 +4,13 @@
 #include "en.h"
 #include "pcie_cong_event.h"
 
+#define MLX5E_CONG_HIGH_STATE 0x7
+
+enum {
+	MLX5E_INBOUND_CONG  = BIT(0),
+	MLX5E_OUTBOUND_CONG = BIT(1),
+};
+
 struct mlx5e_pcie_cong_thresh {
 	u16 inbound_high;
 	u16 inbound_low;
@@ -11,10 +18,27 @@ struct mlx5e_pcie_cong_thresh {
 	u16 outbound_low;
 };
 
+struct mlx5e_pcie_cong_stats {
+	u32 pci_bw_inbound_high;
+	u32 pci_bw_inbound_low;
+	u32 pci_bw_outbound_high;
+	u32 pci_bw_outbound_low;
+};
+
 struct mlx5e_pcie_cong_event {
 	u64 obj_id;
 
 	struct mlx5e_priv *priv;
+
+	/* For event notifier and workqueue. */
+	struct work_struct work;
+	struct mlx5_nb nb;
+
+	/* Stores last read state. */
+	u8 state;
+
+	/* For ethtool stats group. */
+	struct mlx5e_pcie_cong_stats stats;
 };
 
 /* In units of 0.01 % */
@@ -25,6 +49,51 @@ static const struct mlx5e_pcie_cong_thresh default_thresh_config = {
 	.outbound_low = 7500,
 };
 
+static const struct counter_desc mlx5e_pcie_cong_stats_desc[] = {
+	{ MLX5E_DECLARE_STAT(struct mlx5e_pcie_cong_stats,
+			     pci_bw_inbound_high) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_pcie_cong_stats,
+			     pci_bw_inbound_low) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_pcie_cong_stats,
+			     pci_bw_outbound_high) },
+	{ MLX5E_DECLARE_STAT(struct mlx5e_pcie_cong_stats,
+			     pci_bw_outbound_low) },
+};
+
+#define NUM_PCIE_CONG_COUNTERS ARRAY_SIZE(mlx5e_pcie_cong_stats_desc)
+
+static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(pcie_cong)
+{
+	return priv->cong_event ? NUM_PCIE_CONG_COUNTERS : 0;
+}
+
+static MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(pcie_cong) {}
+
+static MLX5E_DECLARE_STATS_GRP_OP_FILL_STRS(pcie_cong)
+{
+	if (!priv->cong_event)
+		return;
+
+	for (int i = 0; i < NUM_PCIE_CONG_COUNTERS; i++)
+		ethtool_puts(data, mlx5e_pcie_cong_stats_desc[i].format);
+}
+
+static MLX5E_DECLARE_STATS_GRP_OP_FILL_STATS(pcie_cong)
+{
+	if (!priv->cong_event)
+		return;
+
+	for (int i = 0; i < NUM_PCIE_CONG_COUNTERS; i++) {
+		u32 ctr = MLX5E_READ_CTR32_CPU(&priv->cong_event->stats,
+					       mlx5e_pcie_cong_stats_desc,
+					       i);
+
+		mlx5e_ethtool_put_stat(data, ctr);
+	}
+}
+
+MLX5E_DEFINE_STATS_GRP(pcie_cong, 0);
+
 static int
 mlx5_cmd_pcie_cong_event_set(struct mlx5_core_dev *dev,
 			     const struct mlx5e_pcie_cong_thresh *config,
@@ -89,6 +158,97 @@ static int mlx5_cmd_pcie_cong_event_destroy(struct mlx5_core_dev *dev,
 	return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
 }
 
+static int mlx5_cmd_pcie_cong_event_query(struct mlx5_core_dev *dev,
+					  u64 obj_id,
+					  u32 *state)
+{
+	u32 in[MLX5_ST_SZ_DW(pcie_cong_event_cmd_in)] = {};
+	u32 out[MLX5_ST_SZ_DW(pcie_cong_event_cmd_out)];
+	void *obj;
+	void *hdr;
+	u8 cong;
+	int err;
+
+	hdr = MLX5_ADDR_OF(pcie_cong_event_cmd_in, in, hdr);
+
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+		 MLX5_CMD_OP_QUERY_GENERAL_OBJECT);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+		 MLX5_GENERAL_OBJECT_TYPES_PCIE_CONG_EVENT);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_id, obj_id);
+
+	err = mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
+	if (err)
+		return err;
+
+	obj = MLX5_ADDR_OF(pcie_cong_event_cmd_out, out, cong_obj);
+
+	if (state) {
+		cong = MLX5_GET(pcie_cong_event_obj, obj, inbound_cong_state);
+		if (cong == MLX5E_CONG_HIGH_STATE)
+			*state |= MLX5E_INBOUND_CONG;
+
+		cong = MLX5_GET(pcie_cong_event_obj, obj, outbound_cong_state);
+		if (cong == MLX5E_CONG_HIGH_STATE)
+			*state |= MLX5E_OUTBOUND_CONG;
+	}
+
+	return 0;
+}
+
+static void mlx5e_pcie_cong_event_work(struct work_struct *work)
+{
+	struct mlx5e_pcie_cong_event *cong_event;
+	struct mlx5_core_dev *dev;
+	struct mlx5e_priv *priv;
+	u32 new_cong_state = 0;
+	u32 changes;
+	int err;
+
+	cong_event = container_of(work, struct mlx5e_pcie_cong_event, work);
+	priv = cong_event->priv;
+	dev = priv->mdev;
+
+	err = mlx5_cmd_pcie_cong_event_query(dev, cong_event->obj_id,
+					     &new_cong_state);
+	if (err) {
+		mlx5_core_warn(dev, "Error %d when querying PCIe cong event object (obj_id=%llu).\n",
+			       err, cong_event->obj_id);
+		return;
+	}
+
+	changes = cong_event->state ^ new_cong_state;
+	if (!changes)
+		return;
+
+	cong_event->state = new_cong_state;
+
+	if (changes & MLX5E_INBOUND_CONG) {
+		if (new_cong_state & MLX5E_INBOUND_CONG)
+			cong_event->stats.pci_bw_inbound_high++;
+		else
+			cong_event->stats.pci_bw_inbound_low++;
+	}
+
+	if (changes & MLX5E_OUTBOUND_CONG) {
+		if (new_cong_state & MLX5E_OUTBOUND_CONG)
+			cong_event->stats.pci_bw_outbound_high++;
+		else
+			cong_event->stats.pci_bw_outbound_low++;
+	}
+}
+
+static int mlx5e_pcie_cong_event_handler(struct notifier_block *nb,
+					 unsigned long event, void *eqe)
+{
+	struct mlx5e_pcie_cong_event *cong_event;
+
+	cong_event = mlx5_nb_cof(nb, struct mlx5e_pcie_cong_event, nb);
+	queue_work(cong_event->priv->wq, &cong_event->work);
+
+	return NOTIFY_OK;
+}
+
 bool mlx5e_pcie_cong_event_supported(struct mlx5_core_dev *dev)
 {
 	u64 features = MLX5_CAP_GEN_2_64(dev, general_obj_types_127_64);
@@ -116,6 +276,10 @@ int mlx5e_pcie_cong_event_init(struct mlx5e_priv *priv)
 	if (!cong_event)
 		return -ENOMEM;
 
+	INIT_WORK(&cong_event->work, mlx5e_pcie_cong_event_work);
+	MLX5_NB_INIT(&cong_event->nb, mlx5e_pcie_cong_event_handler,
+		     OBJECT_CHANGE);
+
 	cong_event->priv = priv;
 
 	err = mlx5_cmd_pcie_cong_event_set(mdev, &default_thresh_config,
@@ -125,10 +289,18 @@ int mlx5e_pcie_cong_event_init(struct mlx5e_priv *priv)
 		goto err_free;
 	}
 
+	err = mlx5_eq_notifier_register(mdev, &cong_event->nb);
+	if (err) {
+		mlx5_core_warn(mdev, "Error registering notifier for the PCIe congestion event\n");
+		goto err_obj_destroy;
+	}
+
 	priv->cong_event = cong_event;
 
 	return 0;
 
+err_obj_destroy:
+	mlx5_cmd_pcie_cong_event_destroy(mdev, cong_event->obj_id);
 err_free:
 	kvfree(cong_event);
 
@@ -145,6 +317,9 @@ void mlx5e_pcie_cong_event_cleanup(struct mlx5e_priv *priv)
 
 	priv->cong_event = NULL;
 
+	mlx5_eq_notifier_unregister(mdev, &cong_event->nb);
+	cancel_work_sync(&cong_event->work);
+
 	if (mlx5_cmd_pcie_cong_event_destroy(mdev, cong_event->obj_id))
 		mlx5_core_warn(mdev, "Error destroying PCIe congestion event (obj_id=%llu)\n",
 			       cong_event->obj_id);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
index 19664fa7f217..87536f158d07 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
@@ -2612,6 +2612,7 @@ mlx5e_stats_grp_t mlx5e_nic_stats_grps[] = {
 #ifdef CONFIG_MLX5_MACSEC
 	&MLX5E_STATS_GRP(macsec_hw),
 #endif
+	&MLX5E_STATS_GRP(pcie_cong),
 };
 
 unsigned int mlx5e_nic_stats_grps_num(struct mlx5e_priv *priv)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
index def5dea1463d..72dbcc1928ef 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
@@ -535,5 +535,6 @@ extern MLX5E_DECLARE_STATS_GRP(ipsec_hw);
 extern MLX5E_DECLARE_STATS_GRP(ipsec_sw);
 extern MLX5E_DECLARE_STATS_GRP(ptp);
 extern MLX5E_DECLARE_STATS_GRP(macsec_hw);
+extern MLX5E_DECLARE_STATS_GRP(pcie_cong);
 
 #endif /* __MLX5_EN_STATS_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
index dfb079e59d85..db54f6d26591 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
@@ -21,6 +21,7 @@
 #include "pci_irq.h"
 #include "devlink.h"
 #include "en_accel/ipsec.h"
+#include "en/pcie_cong_event.h"
 
 enum {
 	MLX5_EQE_OWNER_INIT_VAL	= 0x1,
@@ -585,6 +586,9 @@ static void gather_async_events_mask(struct mlx5_core_dev *dev, u64 mask[4])
 		async_event_mask |=
 			(1ull << MLX5_EVENT_TYPE_OBJECT_CHANGE);
 
+	if (mlx5e_pcie_cong_event_supported(dev))
+		async_event_mask |= (1ull << MLX5_EVENT_TYPE_OBJECT_CHANGE);
+
 	mask[0] = async_event_mask;
 
 	if (MLX5_CAP_GEN(dev, event_cap))
-- 
2.31.1
Re: [PATCH net-next V2 2/3] net/mlx5e: Add device PCIe congestion ethtool stats
Posted by Jakub Kicinski 2 months, 3 weeks ago
On Thu, 10 Jul 2025 09:51:31 +0300 Tariq Toukan wrote:
> +   * - `pci_bw_inbound_high`
> +     - The number of times the device crossed the high inbound PCIe bandwidth
> +       threshold. To be compared to pci_bw_inbound_low to check if the device
> +       is in a congested state.
> +       If pci_bw_inbound_high == pci_bw_inbound_low then the device is not congested.
> +       If pci_bw_inbound_high > pci_bw_inbound_low then the device is congested.
> +     - Informative

The metrics make sense, but utilization has to be averaged over some
period of time to be meaningful. Can you shed any light on what the
measurement period or algorithm is?

> +	changes = cong_event->state ^ new_cong_state;
> +	if (!changes)
> +		return;

no risk of the high / low events coming so quickly we'll miss both?
Should there be a counter for "mis-firing" of that sort?
You'd be surprised how long the scheduling latency for a kernel worker
can be on a busy server :(

> +	cong_event->state = new_cong_state;
> +
> +	if (changes & MLX5E_INBOUND_CONG) {
> +		if (new_cong_state & MLX5E_INBOUND_CONG)
> +			cong_event->stats.pci_bw_inbound_high++;
> +		else
> +			cong_event->stats.pci_bw_inbound_low++;
> +	}
> +
> +	if (changes & MLX5E_OUTBOUND_CONG) {
> +		if (new_cong_state & MLX5E_OUTBOUND_CONG)
> +			cong_event->stats.pci_bw_outbound_high++;
> +		else
> +			cong_event->stats.pci_bw_outbound_low++;
> +	}
Re: [PATCH net-next V2 2/3] net/mlx5e: Add device PCIe congestion ethtool stats
Posted by Dragos Tatulea 2 months, 3 weeks ago
On Fri, Jul 11, 2025 at 04:25:04PM -0700, Jakub Kicinski wrote:
> On Thu, 10 Jul 2025 09:51:31 +0300 Tariq Toukan wrote:
> > +   * - `pci_bw_inbound_high`
> > +     - The number of times the device crossed the high inbound PCIe bandwidth
> > +       threshold. To be compared to pci_bw_inbound_low to check if the device
> > +       is in a congested state.
> > +       If pci_bw_inbound_high == pci_bw_inbound_low then the device is not congested.
> > +       If pci_bw_inbound_high > pci_bw_inbound_low then the device is congested.
> > +     - Informative
> 
> The metrics make sense, but utilization has to be averaged over some
> > period of time to be meaningful. Can you shed any light on what the
> measurement period or algorithm is?
>
The measurement period in FW is 200 ms.

> > +	changes = cong_event->state ^ new_cong_state;
> > +	if (!changes)
> > +		return;
> 
> no risk of the high / low events coming so quickly we'll miss both?
Yes it is possible and it is fine because short bursts are not counted. The
counters are for sustained high PCI BW usage.

> Should there be a counter for "mis-firing" of that sort?
> You'd be surprised how long the scheduling latency for a kernel worker
> can be on a busy server :(
>
The event is just a notification to read the state from FW. If the
read is issued later and the state has not changed then it will not be
considered.

Thanks,
Dragos
Re: [PATCH net-next V2 2/3] net/mlx5e: Add device PCIe congestion ethtool stats
Posted by Jakub Kicinski 2 months, 3 weeks ago
On Sat, 12 Jul 2025 07:55:27 +0000 Dragos Tatulea wrote:
> > The metrics make sense, but utilization has to be averaged over some
> > period of time to be meaningful. Can you shed any light on what the
> > measurement period or algorithm is?
>
> The measurement period in FW is 200 ms.

SG, please include in the doc.
 
> > > +	changes = cong_event->state ^ new_cong_state;
> > > +	if (!changes)
> > > +		return;  
> > 
> > no risk of the high / low events coming so quickly we'll miss both?  
> Yes it is possible and it is fine because short bursts are not counted. The
> counters are for sustained high PCI BW usage.
> 
> > Should there be a counter for "mis-firing" of that sort?
> > You'd be surprised how long the scheduling latency for a kernel worker
> > can be on a busy server :(
> >  
> The event is just a notification to read the state from FW. If the
> read is issued later and the state has not changed then it will not be
> considered.

200ms is within the range of normal scheduler latency on a busy server.
It's not a deal breaker, but I'd personally add a counter for wakeups
which did not result in any state change. Likely recent experience
with constant EEVDF regressions and sched_ext is coloring my judgment.
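
For illustration, such a counter might look like the following sketch
on top of the work handler in this patch (the counter name
pci_bw_stale_event is hypothetical, not part of this series):

	changes = cong_event->state ^ new_cong_state;
	if (!changes) {
		/* The event fired but the FW state matches what we
		 * already have: the high/low crossings cancelled out
		 * before the worker got scheduled.
		 */
		cong_event->stats.pci_bw_stale_event++;
		return;
	}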
Re: [PATCH net-next V2 2/3] net/mlx5e: Add device PCIe congestion ethtool stats
Posted by Tariq Toukan 2 months, 3 weeks ago

On 14/07/2025 18:26, Jakub Kicinski wrote:
> On Sat, 12 Jul 2025 07:55:27 +0000 Dragos Tatulea wrote:
>>> The metrics make sense, but utilization has to be averaged over some
>>> period of time to be meaningful. Can you shed any light on what the
>>> measurement period or algorithm is?
>>
>> The measurement period in FW is 200 ms.
> 
> SG, please include in the doc.
>   
>>>> +	changes = cong_event->state ^ new_cong_state;
>>>> +	if (!changes)
>>>> +		return;
>>>
>>> no risk of the high / low events coming so quickly we'll miss both?
>> Yes it is possible and it is fine because short bursts are not counted. The
>> counters are for sustained high PCI BW usage.
>>
>>> Should there be a counter for "mis-firing" of that sort?
>>> You'd be surprised how long the scheduling latency for a kernel worker
>>> can be on a busy server :(
>>>   
>> The event is just a notification to read the state from FW. If the
>> read is issued later and the state has not changed then it will not be
>> considered.
> 
> 200ms is within the range of normal scheduler latency on a busy server.
> It's not a deal breaker, but I'd personally add a counter for wakeups
> which did not result in any state change. Likely recent experience
> with constant EEVDF regressions and sched_ext is coloring my judgment.
> 

NP with that.
We'll add it as a follow-up patch, after it's implemented and properly
tested.

Same applies for the requested devlink config (replacing the sysfs).

For now, I'll respin without the configuration part and the extra counter.
Re: [PATCH net-next V2 2/3] net/mlx5e: Add device PCIe congestion ethtool stats
Posted by Jakub Kicinski 2 months, 3 weeks ago
On Tue, 15 Jul 2025 16:59:43 +0300 Tariq Toukan wrote:
> NP with that.
> We'll add it as a followup patch, after it's implemented and properly 
> tested.
> 
> Same applies for the requested devlink config (replacing the sysfs).
> 
> For now, I'll respin without the configuration part and the extra counter.

SG.
Re: [PATCH net-next V2 2/3] net/mlx5e: Add device PCIe congestion ethtool stats
Posted by kernel test robot 2 months, 4 weeks ago
Hi Tariq,

kernel test robot noticed the following build errors:

[auto build test ERROR on c65d34296b2252897e37835d6007bbd01b255742]

url:    https://github.com/intel-lab-lkp/linux/commits/Tariq-Toukan/net-mlx5e-Create-destroy-PCIe-Congestion-Event-object/20250710-145940
base:   c65d34296b2252897e37835d6007bbd01b255742
patch link:    https://lore.kernel.org/r/1752130292-22249-3-git-send-email-tariqt%40nvidia.com
patch subject: [PATCH net-next V2 2/3] net/mlx5e: Add device PCIe congestion ethtool stats
config: powerpc-randconfig-003-20250711 (https://download.01.org/0day-ci/archive/20250711/202507110932.iOkSE74e-lkp@intel.com/config)
compiler: clang version 21.0.0git (https://github.com/llvm/llvm-project 01c97b4953e87ae455bd4c41e3de3f0f0f29c61c)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250711/202507110932.iOkSE74e-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202507110932.iOkSE74e-lkp@intel.com/

All errors (new ones prefixed by >>, old ones prefixed by <<):

>> ERROR: modpost: "mlx5e_pcie_cong_event_supported" [drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.ko] undefined!

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
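
The undefined symbol appears because eq.c is linked into mlx5_core.ko
unconditionally, while en/pcie_cong_event.c is only compiled when the
ethernet parts of the driver are enabled. One possible fix sketch (an
assumption, not taken from this thread) is a stub in
en/pcie_cong_event.h for CONFIG_MLX5_CORE_EN=n builds:

	#ifdef CONFIG_MLX5_CORE_EN
	bool mlx5e_pcie_cong_event_supported(struct mlx5_core_dev *dev);
	#else
	static inline bool
	mlx5e_pcie_cong_event_supported(struct mlx5_core_dev *dev)
	{
		return false;
	}
	#endif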