From nobody Sun May 10 14:11:33 2026
From: Eli Cohen
Subject: [PATCH v3 1/2] vdpa: Add support for querying vendor statistics
Date: Mon, 2 May 2022 13:22:00 +0300
Message-ID: <20220502102201.190357-2-elic@nvidia.com>
In-Reply-To: <20220502102201.190357-1-elic@nvidia.com>
References: <20220502102201.190357-1-elic@nvidia.com>
MIME-Version: 1.0
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain;
 charset="utf-8"

Allow reading vendor statistics of a vdpa device. The specific
statistics data are received from the upstream driver in the form of
(attribute name, attribute value) pairs.

Examples of statistics for an mlx5_vdpa device:

received_desc - number of descriptors received by the virtqueue
completed_desc - number of descriptors completed by the virtqueue

A descriptor using indirect buffers is still counted as 1. In addition,
N chained descriptors are counted correctly N times as one would expect.

A new callback was added to vdpa_config_ops which provides the means
for the vdpa driver to return statistics results.

The interface allows reading all the supported virtqueues, including
the control virtqueue if it exists.

Below are some examples taken from mlx5_vdpa which are introduced in
the following patch:

1. Read statistics for the virtqueue at index 1

$ vdpa dev vstats show vdpa-a qidx 1
vdpa-a:
queue_type tx queue_index 1 received_desc 3844836 completed_desc 3844836

2. Read statistics for the virtqueue at index 32

$ vdpa dev vstats show vdpa-a qidx 32
vdpa-a:
queue_type control_vq queue_index 32 received_desc 62 completed_desc 62

3. Read statistics for the virtqueue at index 0 with json output

$ vdpa -j dev vstats show vdpa-a qidx 0
{"vstats":{"vdpa-a":{
"queue_type":"rx","queue_index":0,"name":"received_desc","value":417776,\
"name":"completed_desc","value":417548}}}

4. Read statistics for the virtqueue at index 0 with pretty json output

$ vdpa -jp dev vstats show vdpa-a qidx 0
{
    "vstats": {
        "vdpa-a": {

            "queue_type": "rx",
            "queue_index": 0,
            "name": "received_desc",
            "value": 417776,
            "name": "completed_desc",
            "value": 417548
        }
    }
}

Signed-off-by: Eli Cohen
Acked-by: Si-Wei Liu
---
 drivers/vdpa/vdpa.c       | 129 ++++++++++++++++++++++++++++++++++++++
 include/linux/vdpa.h      |   5 ++
 include/uapi/linux/vdpa.h |   6 ++
 3 files changed, 140 insertions(+)

diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
index 2b75c00b1005..933466f61ca8 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -909,6 +909,74 @@ vdpa_dev_config_fill(struct vdpa_device *vdev, struct sk_buff *msg, u32 portid,
 	return err;
 }
 
+static int vdpa_fill_stats_rec(struct vdpa_device *vdev, struct sk_buff *msg,
+			       struct genl_info *info, u32 index)
+{
+	int err;
+
+	err = vdev->config->get_vendor_vq_stats(vdev, index, msg, info->extack);
+	if (err)
+		return err;
+
+	if (nla_put_u32(msg, VDPA_ATTR_DEV_QUEUE_INDEX, index))
+		return -EMSGSIZE;
+
+	return 0;
+}
+
+static int vendor_stats_fill(struct vdpa_device *vdev, struct sk_buff *msg,
+			     struct genl_info *info, u32 index)
+{
+	int err;
+
+	if (!vdev->config->get_vendor_vq_stats)
+		return -EOPNOTSUPP;
+
+	err = vdpa_fill_stats_rec(vdev, msg, info, index);
+	if (err)
+		return err;
+
+	return 0;
+}
+
+static int vdpa_dev_vendor_stats_fill(struct vdpa_device *vdev,
+				      struct sk_buff *msg,
+				      struct genl_info *info, u32 index)
+{
+	u32 device_id;
+	void *hdr;
+	int err;
+	u32 portid = info->snd_portid;
+	u32 seq = info->snd_seq;
+	u32 flags = 0;
+
+	hdr = genlmsg_put(msg, portid, seq, &vdpa_nl_family, flags,
+			  VDPA_CMD_DEV_VSTATS_GET);
+	if (!hdr)
+		return -EMSGSIZE;
+
+	if (nla_put_string(msg, VDPA_ATTR_DEV_NAME, dev_name(&vdev->dev))) {
+		err = -EMSGSIZE;
+		goto undo_msg;
+	}
+
+	device_id = vdev->config->get_device_id(vdev);
+	if (nla_put_u32(msg, VDPA_ATTR_DEV_ID, device_id)) {
+		err = -EMSGSIZE;
+		goto undo_msg;
+	}
+
+	err = vendor_stats_fill(vdev, msg, info, index);
+
+	genlmsg_end(msg, hdr);
+
+	return err;
+
+undo_msg:
+	genlmsg_cancel(msg, hdr);
+	return err;
+}
+
 static int vdpa_nl_cmd_dev_config_get_doit(struct sk_buff *skb, struct genl_info *info)
 {
 	struct vdpa_device *vdev;
@@ -990,6 +1058,60 @@ vdpa_nl_cmd_dev_config_get_dumpit(struct sk_buff *msg, struct netlink_callback *
 	return msg->len;
 }
 
+static int vdpa_nl_cmd_dev_stats_get_doit(struct sk_buff *skb,
+					  struct genl_info *info)
+{
+	struct vdpa_device *vdev;
+	struct sk_buff *msg;
+	const char *devname;
+	struct device *dev;
+	u32 index;
+	int err;
+
+	if (!info->attrs[VDPA_ATTR_DEV_NAME])
+		return -EINVAL;
+
+	if (!info->attrs[VDPA_ATTR_DEV_QUEUE_INDEX])
+		return -EINVAL;
+
+	devname = nla_data(info->attrs[VDPA_ATTR_DEV_NAME]);
+	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!msg)
+		return -ENOMEM;
+
+	index = nla_get_u32(info->attrs[VDPA_ATTR_DEV_QUEUE_INDEX]);
+	mutex_lock(&vdpa_dev_mutex);
+	dev = bus_find_device(&vdpa_bus, NULL, devname, vdpa_name_match);
+	if (!dev) {
+		NL_SET_ERR_MSG_MOD(info->extack, "device not found");
+		err = -ENODEV;
+		goto dev_err;
+	}
+	vdev = container_of(dev, struct vdpa_device, dev);
+	if (!vdev->mdev) {
+		NL_SET_ERR_MSG_MOD(info->extack, "unmanaged vdpa device");
+		err = -EINVAL;
+		goto mdev_err;
+	}
+	err = vdpa_dev_vendor_stats_fill(vdev, msg, info, index);
+	if (!err)
+		err = genlmsg_reply(msg, info);
+
+	put_device(dev);
+	mutex_unlock(&vdpa_dev_mutex);
+
+	if (err)
+		nlmsg_free(msg);
+
+	return err;
+
+mdev_err:
+	put_device(dev);
+dev_err:
+	mutex_unlock(&vdpa_dev_mutex);
+	return err;
+}
+
 static const struct nla_policy vdpa_nl_policy[VDPA_ATTR_MAX + 1] = {
 	[VDPA_ATTR_MGMTDEV_BUS_NAME] = { .type = NLA_NUL_STRING },
 	[VDPA_ATTR_MGMTDEV_DEV_NAME] = { .type = NLA_STRING },
@@ -997,6 +1119,7 @@ static const struct nla_policy vdpa_nl_policy[VDPA_ATTR_MAX + 1] = {
 	[VDPA_ATTR_DEV_NET_CFG_MACADDR] = NLA_POLICY_ETH_ADDR,
 	/* virtio spec 1.1 section 5.1.4.1 for valid MTU range */
 	[VDPA_ATTR_DEV_NET_CFG_MTU] = NLA_POLICY_MIN(NLA_U16, 68),
+	[VDPA_ATTR_DEV_QUEUE_INDEX] = NLA_POLICY_RANGE(NLA_U32, 0, 65535),
 };
 
 static const struct genl_ops vdpa_nl_ops[] = {
@@ -1030,6 +1153,12 @@ static const struct genl_ops vdpa_nl_ops[] = {
 		.doit = vdpa_nl_cmd_dev_config_get_doit,
 		.dumpit = vdpa_nl_cmd_dev_config_get_dumpit,
 	},
+	{
+		.cmd = VDPA_CMD_DEV_VSTATS_GET,
+		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+		.doit = vdpa_nl_cmd_dev_stats_get_doit,
+		.flags = GENL_ADMIN_PERM,
+	},
 };
 
 static struct genl_family vdpa_nl_family __ro_after_init = {
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 8943a209202e..48ed1fc00830 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -276,6 +276,9 @@ struct vdpa_config_ops {
 			    const struct vdpa_vq_state *state);
 	int (*get_vq_state)(struct vdpa_device *vdev, u16 idx,
 			    struct vdpa_vq_state *state);
+	int (*get_vendor_vq_stats)(struct vdpa_device *vdev, u16 idx,
+				   struct sk_buff *msg,
+				   struct netlink_ext_ack *extack);
 	struct vdpa_notification_area
 	(*get_vq_notification)(struct vdpa_device *vdev, u16 idx);
 	/* vq irq is not expected to be changed once DRIVER_OK is set */
@@ -473,4 +476,6 @@ struct vdpa_mgmt_dev {
 int vdpa_mgmtdev_register(struct vdpa_mgmt_dev *mdev);
 void vdpa_mgmtdev_unregister(struct vdpa_mgmt_dev *mdev);
 
+#define VDPA_INVAL_QUEUE_INDEX 0xffff
+
 #endif /* _LINUX_VDPA_H */
diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
index 1061d8d2d09d..25c55cab3d7c 100644
--- a/include/uapi/linux/vdpa.h
+++ b/include/uapi/linux/vdpa.h
@@ -18,6 +18,7 @@ enum vdpa_command {
 	VDPA_CMD_DEV_DEL,
 	VDPA_CMD_DEV_GET,		/* can dump */
 	VDPA_CMD_DEV_CONFIG_GET,	/* can dump */
+	VDPA_CMD_DEV_VSTATS_GET,
 };
 
 enum vdpa_attr {
@@ -46,6 +47,11 @@ enum vdpa_attr {
 	VDPA_ATTR_DEV_NEGOTIATED_FEATURES,	/* u64 */
 	VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,		/* u32 */
 	VDPA_ATTR_DEV_SUPPORTED_FEATURES,	/* u64 */
+
+	VDPA_ATTR_DEV_QUEUE_INDEX,		/* u32 */
+	VDPA_ATTR_DEV_VENDOR_ATTR_NAME,		/* string */
+	VDPA_ATTR_DEV_VENDOR_ATTR_VALUE,	/* u64 */
+
 	/* new attributes must be added above here */
 	VDPA_ATTR_MAX,
 };
-- 
2.35.1

From nobody Sun May 10 14:11:33 2026
From: Eli Cohen
Subject: [PATCH v3 2/2] vdpa/mlx5: Add support for reading descriptor statistics
Date: Mon, 2 May 2022 13:22:01 +0300
Message-ID: <20220502102201.190357-3-elic@nvidia.com>
In-Reply-To: <20220502102201.190357-1-elic@nvidia.com>
References: <20220502102201.190357-1-elic@nvidia.com>
MIME-Version: 1.0
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type:
 text/plain; charset="utf-8"

Implement the get_vendor_vq_stats callback of vdpa_config_ops to return
the statistics for a virtqueue.

The statistics are provided as vendor-specific statistics, where the
driver provides pairs of attribute name and attribute value.

In addition to the attribute name/attribute value pairs, the driver
returns the negotiated features and the maximum number of virtqueue
pairs, so userspace can decide for a given queue index whether it is a
data or a control virtqueue.

Currently supported are received descriptors and completed descriptors.

Acked-by: Jason Wang
Signed-off-by: Eli Cohen

v2 -> v3:
Fix sparse warning
Reviewed-by: Si-Wei Liu
---
 drivers/vdpa/mlx5/core/mlx5_vdpa.h |   2 +
 drivers/vdpa/mlx5/net/mlx5_vnet.c  | 165 +++++++++++++++++++++++++++++
 include/linux/mlx5/mlx5_ifc.h      |   1 +
 include/linux/mlx5/mlx5_ifc_vdpa.h |  39 +++++++
 4 files changed, 207 insertions(+)

diff --git a/drivers/vdpa/mlx5/core/mlx5_vdpa.h b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
index daaf7b503677..44104093163b 100644
--- a/drivers/vdpa/mlx5/core/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
@@ -61,6 +61,8 @@ struct mlx5_control_vq {
 	struct vringh_kiov riov;
 	struct vringh_kiov wiov;
 	unsigned short head;
+	unsigned int received_desc;
+	unsigned int completed_desc;
 };
 
 struct mlx5_vdpa_wq_ent {
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 79001301b383..473ec481813c 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -119,6 +119,7 @@ struct mlx5_vdpa_virtqueue {
 	struct mlx5_vdpa_umem umem2;
 	struct mlx5_vdpa_umem umem3;
 
+	u32 counter_set_id;
 	bool initialized;
 	int index;
 	u32 virtq_id;
@@ -164,6 +165,8 @@ struct mlx5_vdpa_net {
 	struct notifier_block nb;
 	struct vdpa_callback config_cb;
 	struct mlx5_vdpa_wq_ent cvq_ent;
+	/* sync access to virtqueues statistics */
+	struct mutex numq_lock;
 };
 
 static void free_resources(struct mlx5_vdpa_net *ndev);
@@ -822,6 +825,12 @@ static u16 get_features_12_3(u64 features)
 	       (!!(features & BIT_ULL(VIRTIO_NET_F_GUEST_CSUM)) << 6);
 }
 
+static bool counters_supported(const struct mlx5_vdpa_dev *mvdev)
+{
+	return MLX5_CAP_GEN_64(mvdev->mdev, general_obj_types) &
+	       BIT_ULL(MLX5_OBJ_TYPE_VIRTIO_Q_COUNTERS);
+}
+
 static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq)
 {
 	int inlen = MLX5_ST_SZ_BYTES(create_virtio_net_q_in);
@@ -876,6 +885,8 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
 	MLX5_SET(virtio_q, vq_ctx, umem_3_id, mvq->umem3.id);
 	MLX5_SET(virtio_q, vq_ctx, umem_3_size, mvq->umem3.size);
 	MLX5_SET(virtio_q, vq_ctx, pd, ndev->mvdev.res.pdn);
+	if (counters_supported(&ndev->mvdev))
+		MLX5_SET(virtio_q, vq_ctx, counter_set_id, mvq->counter_set_id);
 
 	err = mlx5_cmd_exec(ndev->mvdev.mdev, in, inlen, out, sizeof(out));
 	if (err)
@@ -1139,6 +1150,47 @@ static int modify_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
 	return err;
 }
 
+static int counter_set_alloc(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq)
+{
+	u32 in[MLX5_ST_SZ_DW(create_virtio_q_counters_in)] = {};
+	u32 out[MLX5_ST_SZ_DW(create_virtio_q_counters_out)] = {};
+	void *cmd_hdr;
+	int err;
+
+	if (!counters_supported(&ndev->mvdev))
+		return 0;
+
+	cmd_hdr = MLX5_ADDR_OF(create_virtio_q_counters_in, in, hdr);
+
+	MLX5_SET(general_obj_in_cmd_hdr, cmd_hdr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
+	MLX5_SET(general_obj_in_cmd_hdr, cmd_hdr, obj_type, MLX5_OBJ_TYPE_VIRTIO_Q_COUNTERS);
+	MLX5_SET(general_obj_in_cmd_hdr, cmd_hdr, uid, ndev->mvdev.res.uid);
+
+	err = mlx5_cmd_exec(ndev->mvdev.mdev, in, sizeof(in), out, sizeof(out));
+	if (err)
+		return err;
+
+	mvq->counter_set_id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
+
+	return 0;
+}
+
+static void counter_set_dealloc(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq)
+{
+	u32 in[MLX5_ST_SZ_DW(destroy_virtio_q_counters_in)] = {};
+	u32 out[MLX5_ST_SZ_DW(destroy_virtio_q_counters_out)] = {};
+
+	if (!counters_supported(&ndev->mvdev))
+		return;
+
+	MLX5_SET(destroy_virtio_q_counters_in, in, hdr.opcode, MLX5_CMD_OP_DESTROY_GENERAL_OBJECT);
+	MLX5_SET(destroy_virtio_q_counters_in, in, hdr.obj_id, mvq->counter_set_id);
+	MLX5_SET(destroy_virtio_q_counters_in, in, hdr.uid, ndev->mvdev.res.uid);
+	MLX5_SET(destroy_virtio_q_counters_in, in, hdr.obj_type, MLX5_OBJ_TYPE_VIRTIO_Q_COUNTERS);
+	if (mlx5_cmd_exec(ndev->mvdev.mdev, in, sizeof(in), out, sizeof(out)))
+		mlx5_vdpa_warn(&ndev->mvdev, "dealloc counter set 0x%x\n", mvq->counter_set_id);
+}
+
 static int setup_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq)
 {
 	u16 idx = mvq->index;
@@ -1166,6 +1218,10 @@ static int setup_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq)
 	if (err)
 		goto err_connect;
 
+	err = counter_set_alloc(ndev, mvq);
+	if (err)
+		goto err_counter;
+
 	err = create_virtqueue(ndev, mvq);
 	if (err)
 		goto err_connect;
@@ -1183,6 +1239,8 @@ static int setup_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq)
 	return 0;
 
 err_connect:
+	counter_set_dealloc(ndev, mvq);
+err_counter:
 	qp_destroy(ndev, &mvq->vqqp);
 err_vqqp:
 	qp_destroy(ndev, &mvq->fwqp);
@@ -1227,6 +1285,7 @@ static void teardown_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *
 
 	suspend_vq(ndev, mvq);
 	destroy_virtqueue(ndev, mvq);
+	counter_set_dealloc(ndev, mvq);
 	qp_destroy(ndev, &mvq->vqqp);
 	qp_destroy(ndev, &mvq->fwqp);
 	cq_destroy(ndev, mvq->index);
@@ -1633,8 +1692,10 @@ static virtio_net_ctrl_ack handle_ctrl_mq(struct mlx5_vdpa_dev *mvdev, u8 cmd)
 			break;
 		}
 
+		mutex_lock(&ndev->numq_lock);
 		if (!change_num_qps(mvdev, newqps))
 			status = VIRTIO_NET_OK;
+		mutex_unlock(&ndev->numq_lock);
 
 		break;
 	default:
@@ -1681,6 +1742,7 @@ static void mlx5_cvq_kick_handler(struct work_struct *work)
 		if (read != sizeof(ctrl))
 			break;
 
+		cvq->received_desc++;
 		switch (ctrl.class) {
 		case VIRTIO_NET_CTRL_MAC:
 			status = handle_ctrl_mac(mvdev, ctrl.cmd);
@@ -1704,6 +1766,7 @@ static void mlx5_cvq_kick_handler(struct work_struct *work)
 		if (vringh_need_notify_iotlb(&cvq->vring))
 			vringh_notify(&cvq->vring);
 
+		cvq->completed_desc++;
 		queue_work(mvdev->wq, &wqent->work);
 		break;
 	}
@@ -2323,6 +2386,8 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
 	mlx5_vdpa_destroy_mr(&ndev->mvdev);
 	ndev->mvdev.status = 0;
 	ndev->cur_num_vqs = 0;
+	ndev->mvdev.cvq.received_desc = 0;
+	ndev->mvdev.cvq.completed_desc = 0;
 	memset(ndev->event_cbs, 0, sizeof(*ndev->event_cbs) * (mvdev->max_vqs + 1));
 	ndev->mvdev.actual_features = 0;
 	++mvdev->generation;
@@ -2401,6 +2466,7 @@ static void mlx5_vdpa_free(struct vdpa_device *vdev)
 		mlx5_mpfs_del_mac(pfmdev, ndev->config.mac);
 	}
 	mlx5_vdpa_free_resources(&ndev->mvdev);
+	mutex_destroy(&ndev->numq_lock);
 	mutex_destroy(&ndev->reslock);
 	kfree(ndev->event_cbs);
 	kfree(ndev->vqs);
@@ -2442,6 +2508,102 @@ static u64 mlx5_vdpa_get_driver_features(struct vdpa_device *vdev)
 	return mvdev->actual_features;
 }
 
+static int counter_set_query(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq,
+			     u64 *received_desc, u64 *completed_desc)
+{
+	u32 in[MLX5_ST_SZ_DW(query_virtio_q_counters_in)] = {};
+	u32 out[MLX5_ST_SZ_DW(query_virtio_q_counters_out)] = {};
+	void *cmd_hdr;
+	void *ctx;
+	int err;
+
+	if (!counters_supported(&ndev->mvdev))
+		return -EOPNOTSUPP;
+
+	if (mvq->fw_state != MLX5_VIRTIO_NET_Q_OBJECT_STATE_RDY)
+		return -EAGAIN;
+
+	cmd_hdr = MLX5_ADDR_OF(query_virtio_q_counters_in, in, hdr);
+
+	MLX5_SET(general_obj_in_cmd_hdr, cmd_hdr, opcode, MLX5_CMD_OP_QUERY_GENERAL_OBJECT);
+	MLX5_SET(general_obj_in_cmd_hdr, cmd_hdr, obj_type, MLX5_OBJ_TYPE_VIRTIO_Q_COUNTERS);
+	MLX5_SET(general_obj_in_cmd_hdr, cmd_hdr, uid, ndev->mvdev.res.uid);
+	MLX5_SET(general_obj_in_cmd_hdr, cmd_hdr, obj_id, mvq->counter_set_id);
+
+	err = mlx5_cmd_exec(ndev->mvdev.mdev, in, sizeof(in), out, sizeof(out));
+	if (err)
+		return err;
+
+	ctx = MLX5_ADDR_OF(query_virtio_q_counters_out, out, counters);
+	*received_desc = MLX5_GET64(virtio_q_counters, ctx, received_desc);
+	*completed_desc = MLX5_GET64(virtio_q_counters, ctx, completed_desc);
+	return 0;
+}
+
+static int mlx5_vdpa_get_vendor_vq_stats(struct vdpa_device *vdev, u16 idx,
+					 struct sk_buff *msg,
+					 struct netlink_ext_ack *extack)
+{
+	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
+	struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
+	struct mlx5_vdpa_virtqueue *mvq;
+	struct mlx5_control_vq *cvq;
+	u64 received_desc;
+	u64 completed_desc;
+	int err = 0;
+	u16 max_vqp;
+
+	mutex_lock(&ndev->numq_lock);
+	if (!is_index_valid(mvdev, idx)) {
+		NL_SET_ERR_MSG_MOD(extack, "virtqueue index is not valid");
+		err = -EINVAL;
+		goto out_err;
+	}
+
+	if (idx == ctrl_vq_idx(mvdev)) {
+		cvq = &mvdev->cvq;
+		received_desc = cvq->received_desc;
+		completed_desc = cvq->completed_desc;
+		goto out;
+	}
+
+	mvq = &ndev->vqs[idx];
+	err = counter_set_query(ndev, mvq, &received_desc, &completed_desc);
+	if (err) {
+		NL_SET_ERR_MSG_MOD(extack, "failed to query hardware");
+		goto out_err;
+	}
+
+out:
+	err = -EMSGSIZE;
+	if (nla_put_u64_64bit(msg, VDPA_ATTR_DEV_NEGOTIATED_FEATURES,
+			      mvdev->actual_features, VDPA_ATTR_PAD))
+		goto out_err;
+
+	max_vqp = mlx5vdpa16_to_cpu(mvdev, ndev->config.max_virtqueue_pairs);
+	if (nla_put_u16(msg, VDPA_ATTR_DEV_NET_CFG_MAX_VQP, max_vqp))
+		goto out_err;
+
+	if (nla_put_string(msg, VDPA_ATTR_DEV_VENDOR_ATTR_NAME, "received_desc"))
+		goto out_err;
+
+	if (nla_put_u64_64bit(msg, VDPA_ATTR_DEV_VENDOR_ATTR_VALUE, received_desc,
+			      VDPA_ATTR_PAD))
+		goto out_err;
+
+	if (nla_put_string(msg, VDPA_ATTR_DEV_VENDOR_ATTR_NAME, "completed_desc"))
+		goto out_err;
+
+	if (nla_put_u64_64bit(msg, VDPA_ATTR_DEV_VENDOR_ATTR_VALUE, completed_desc,
+			      VDPA_ATTR_PAD))
+		goto out_err;
+
+	err = 0;
+out_err:
+	mutex_unlock(&ndev->numq_lock);
+	return err;
+}
+
 static const struct vdpa_config_ops mlx5_vdpa_ops = {
 	.set_vq_address = mlx5_vdpa_set_vq_address,
 	.set_vq_num = mlx5_vdpa_set_vq_num,
@@ -2451,6 +2613,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
 	.get_vq_ready = mlx5_vdpa_get_vq_ready,
 	.set_vq_state = mlx5_vdpa_set_vq_state,
 	.get_vq_state = mlx5_vdpa_get_vq_state,
+	.get_vendor_vq_stats = mlx5_vdpa_get_vendor_vq_stats,
 	.get_vq_notification = mlx5_get_vq_notification,
 	.get_vq_irq = mlx5_get_vq_irq,
 	.get_vq_align = mlx5_vdpa_get_vq_align,
@@ -2706,6 +2869,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
 
 	init_mvqs(ndev);
 	mutex_init(&ndev->reslock);
+	mutex_init(&ndev->numq_lock);
 	config = &ndev->config;
 
 	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MTU)) {
@@ -2788,6 +2952,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
 	if (!is_zero_ether_addr(config->mac))
 		mlx5_mpfs_del_mac(pfmdev, config->mac);
 err_mtu:
+	mutex_destroy(&ndev->numq_lock);
 	mutex_destroy(&ndev->reslock);
 err_alloc:
 	put_device(&mvdev->vdev.dev);
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 49a48d7709ac..1d193d9b6029 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -94,6 +94,7 @@ enum {
 enum {
 	MLX5_OBJ_TYPE_GENEVE_TLV_OPT = 0x000b,
 	MLX5_OBJ_TYPE_VIRTIO_NET_Q = 0x000d,
+	MLX5_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c,
 	MLX5_OBJ_TYPE_MATCH_DEFINER = 0x0018,
 	MLX5_OBJ_TYPE_MKEY = 0xff01,
 	MLX5_OBJ_TYPE_QP = 0xff02,
diff --git a/include/linux/mlx5/mlx5_ifc_vdpa.h b/include/linux/mlx5/mlx5_ifc_vdpa.h
index 1a9c9d94cb59..4414ed5b6ed2 100644
--- a/include/linux/mlx5/mlx5_ifc_vdpa.h
+++ b/include/linux/mlx5/mlx5_ifc_vdpa.h
@@ -165,4 +165,43 @@ struct mlx5_ifc_modify_virtio_net_q_out_bits {
 	struct mlx5_ifc_general_obj_out_cmd_hdr_bits general_obj_out_cmd_hdr;
 };
 
+struct mlx5_ifc_virtio_q_counters_bits {
+	u8    modify_field_select[0x40];
+	u8    reserved_at_40[0x40];
+	u8    received_desc[0x40];
+	u8    completed_desc[0x40];
+	u8    error_cqes[0x20];
+	u8    bad_desc_errors[0x20];
+	u8    exceed_max_chain[0x20];
+	u8    invalid_buffer[0x20];
+	u8    reserved_at_180[0x280];
+};
+
+struct mlx5_ifc_create_virtio_q_counters_in_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters;
+};
+
+struct mlx5_ifc_create_virtio_q_counters_out_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters;
+};
+
+struct mlx5_ifc_destroy_virtio_q_counters_in_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+};
+
+struct mlx5_ifc_destroy_virtio_q_counters_out_bits {
+	struct mlx5_ifc_general_obj_out_cmd_hdr_bits hdr;
+};
+
+struct mlx5_ifc_query_virtio_q_counters_in_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+};
+
+struct mlx5_ifc_query_virtio_q_counters_out_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_virtio_q_counters_bits counters;
+};
+
 #endif /* __MLX5_IFC_VDPA_H_ */
-- 
2.35.1