From: Cosmin Ratiu <cratiu@nvidia.com>
Rates were not documented in the mlx5-specific file, so add examples
of how to rate-limit VFs and groups of VFs, and provide an example of
the intended way to achieve cross-eswitch scheduling.
Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com>
Reviewed-by: Carolina Jubran <cjubran@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
Documentation/networking/devlink/mlx5.rst | 54 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 54 insertions(+)
diff --git a/Documentation/networking/devlink/mlx5.rst b/Documentation/networking/devlink/mlx5.rst
index 4bba4d780a4a..62c4d7bf0877 100644
--- a/Documentation/networking/devlink/mlx5.rst
+++ b/Documentation/networking/devlink/mlx5.rst
@@ -419,3 +419,57 @@ User commands examples:
 .. note::
    This command can run over all interfaces such as PF/VF and representor ports.
+
+Rates
+=====
+
+mlx5 devices can limit the transmission rate of individual VFs, or of groups
+of VFs, via the devlink-rate API in switchdev mode.
+
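+.. note::
+   Rate objects are only available in switchdev mode. As an example, assuming
+   the device is at pci/0000:82:00.0, switchdev mode can be enabled with::
+
+      $ devlink dev eswitch set pci/0000:82:00.0 mode switchdev
+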
+User commands examples:
+
+- Print the existing rates::
+
+    $ devlink port function rate show
+
+- Set a max TX rate limit on traffic from VF0::
+
+    $ devlink port function rate set pci/0000:82:00.0/1 tx_max 10Gbit
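+
+  The ``1`` in ``pci/0000:82:00.0/1`` is the devlink port index of the VF,
+  as listed by ``devlink port show``.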
+
+- Create a rate group with a max TX rate limit and add two VFs to it::
+
+    $ devlink port function rate add pci/0000:82:00.0/group1 tx_max 10Gbit
+    $ devlink port function rate set pci/0000:82:00.0/1 parent group1
+    $ devlink port function rate set pci/0000:82:00.0/2 parent group1
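+
+  A group which no function references anymore can be deleted with::
+
+    $ devlink port function rate del pci/0000:82:00.0/group1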
+
+- Same scenario, with a min guarantee of 20% of the bandwidth for the first VF::
+
+    $ devlink port function rate add pci/0000:82:00.0/group1 tx_max 10Gbit
+    $ devlink port function rate set pci/0000:82:00.0/1 parent group1 tx_share 2Gbit
+    $ devlink port function rate set pci/0000:82:00.0/2 parent group1
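+
+  ``tx_share`` sets a minimum bandwidth guarantee; here 2Gbit out of the
+  group's 10Gbit ``tx_max`` corresponds to the 20% guarantee.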
+
+- Cross-device scheduling::
+
+    $ devlink port function rate add pci/0000:82:00.0/group1 tx_max 10Gbit
+    $ devlink port function rate set pci/0000:82:00.1/32769 parent pci/0000:82:00.0/group1
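+
+  This assigns a function of the second eswitch (``0000:82:00.1``) to a rate
+  group created on the first one, so functions from both eswitches are
+  scheduled together under the group's 10Gbit limit. Here ``32769`` is the
+  devlink port index of the function on the second device.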
--
2.31.1