From: Eli Cohen
Subject: [PATCH v8 1/6] vdpa: Fix error logic in vdpa_nl_cmd_dev_get_doit
Date: Wed, 18 May 2022 16:37:59 +0300
Message-ID: <20220518133804.1075129-2-elic@nvidia.com>
In-Reply-To: <20220518133804.1075129-1-elic@nvidia.com>
References: <20220518133804.1075129-1-elic@nvidia.com>
In vdpa_nl_cmd_dev_get_doit(), if the call to genlmsg_reply() fails we
must not call nlmsg_free() since this is done inside genlmsg_reply().

Fix it.

Fixes: bc0d90ee021f ("vdpa: Enable user to query vdpa device info")
Reviewed-by: Si-Wei Liu
Acked-by: Jason Wang
Signed-off-by: Eli Cohen
---
 drivers/vdpa/vdpa.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
index 2b75c00b1005..fac89a0d8178 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -756,14 +756,19 @@ static int vdpa_nl_cmd_dev_get_doit(struct sk_buff *skb, struct genl_info *info)
 		goto mdev_err;
 	}
 	err = vdpa_dev_fill(vdev, msg, info->snd_portid, info->snd_seq, 0, info->extack);
-	if (!err)
-		err = genlmsg_reply(msg, info);
+	if (err)
+		goto mdev_err;
+
+	err = genlmsg_reply(msg, info);
+	put_device(dev);
+	mutex_unlock(&vdpa_dev_mutex);
+	return err;
+
 mdev_err:
 	put_device(dev);
 err:
 	mutex_unlock(&vdpa_dev_mutex);
-	if (err)
-		nlmsg_free(msg);
+	nlmsg_free(msg);
 	return err;
 }
 
-- 
2.35.1
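The bug being fixed is an ownership question: genlmsg_reply() consumes the message buffer on both success and failure, so freeing it again in the caller's error path is a double free. A minimal userspace sketch of the pattern (toy `reply_consumes()` and `do_get()` stand-ins, not the kernel API; error codes mimic -ENOMEM/-EINVAL by value only):

```c
#include <stdlib.h>

/* Toy stand-in for genlmsg_reply(): it always consumes (frees) the
 * message buffer, whether it succeeds or fails -- which is exactly why
 * the caller must not call nlmsg_free() after invoking it. */
static int reply_consumes(char *msg, int simulate_error)
{
	free(msg);			/* ownership transfers here, even on error */
	return simulate_error ? -1 : 0;
}

/* Caller following the fixed control flow: free the buffer only on the
 * error paths reached *before* the reply call. */
static int do_get(int fill_fails, int reply_fails)
{
	char *msg = malloc(64);
	int err;

	if (!msg)
		return -12;		/* stands in for -ENOMEM */

	err = fill_fails ? -22 : 0;	/* stands in for vdpa_dev_fill() */
	if (err)
		goto out_free;		/* reply not called: we still own msg */

	return reply_consumes(msg, reply_fails); /* msg freed inside, either way */

out_free:
	free(msg);
	return err;
}
```

With this shape, every path frees the buffer exactly once, matching the restructured goto ladder in the patch.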
From: Eli Cohen
Subject: [PATCH v8 2/6] vdpa: Add support for querying vendor statistics
Date: Wed, 18 May 2022 16:38:00 +0300
Message-ID: <20220518133804.1075129-3-elic@nvidia.com>
In-Reply-To: <20220518133804.1075129-1-elic@nvidia.com>
References: <20220518133804.1075129-1-elic@nvidia.com>
Allow reading vendor statistics of a vdpa device. The specific
statistics data are received from the upstream driver in the form of
(attribute name, attribute value) pairs.

Examples of statistics for a mlx5_vdpa device are:

received_desc - number of descriptors received by the virtqueue
completed_desc - number of descriptors completed by the virtqueue

A descriptor using indirect buffers is still counted as 1. In addition,
N chained descriptors are counted correctly N times as one would expect.

A new callback was added to vdpa_config_ops which provides the means for
the vdpa driver to return statistics results.

The interface allows for reading all the supported virtqueues, including
the control virtqueue if it exists.

Below are some examples taken from mlx5_vdpa which are introduced in the
following patch:

1. Read statistics for the virtqueue at index 1

$ vdpa dev vstats show vdpa-a qidx 1
vdpa-a:
queue_type tx queue_index 1 received_desc 3844836 completed_desc 3844836

2. Read statistics for the virtqueue at index 32

$ vdpa dev vstats show vdpa-a qidx 32
vdpa-a:
queue_type control_vq queue_index 32 received_desc 62 completed_desc 62

3. Read statistics for the virtqueue at index 0 with json output

$ vdpa -j dev vstats show vdpa-a qidx 0
{"vstats":{"vdpa-a":{
"queue_type":"rx","queue_index":0,"name":"received_desc","value":417776,\
"name":"completed_desc","value":417548}}}

4. Read statistics for the virtqueue at index 0 with pretty json output

$ vdpa -jp dev vstats show vdpa-a qidx 0
{
    "vstats": {
        "vdpa-a": {
            "queue_type": "rx",
            "queue_index": 0,
            "name": "received_desc",
            "value": 417776,
            "name": "completed_desc",
            "value": 417548
        }
    }
}

Signed-off-by: Eli Cohen
---
 drivers/vdpa/vdpa.c       | 162 ++++++++++++++++++++++++++++++++++++++
 include/linux/vdpa.h      |   3 +
 include/uapi/linux/vdpa.h |   6 ++
 3 files changed, 171 insertions(+)

diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
index fac89a0d8178..31b5eb2c0778 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -914,6 +914,108 @@ vdpa_dev_config_fill(struct vdpa_device *vdev, struct sk_buff *msg, u32 portid,
 	return err;
 }
 
+static int vdpa_fill_stats_rec(struct vdpa_device *vdev, struct sk_buff *msg,
+			       struct genl_info *info, u32 index)
+{
+	struct virtio_net_config config = {};
+	u64 features;
+	u16 max_vqp;
+	u8 status;
+	int err;
+
+	status = vdev->config->get_status(vdev);
+	if (!(status & VIRTIO_CONFIG_S_FEATURES_OK)) {
+		NL_SET_ERR_MSG_MOD(info->extack, "feature negotiation not complete");
+		return -EAGAIN;
+	}
+	vdpa_get_config_unlocked(vdev, 0, &config, sizeof(config));
+
+	max_vqp = le16_to_cpu(config.max_virtqueue_pairs);
+	if (nla_put_u16(msg, VDPA_ATTR_DEV_NET_CFG_MAX_VQP, max_vqp))
+		return -EMSGSIZE;
+
+	features = vdev->config->get_driver_features(vdev);
+	if (nla_put_u64_64bit(msg, VDPA_ATTR_DEV_NEGOTIATED_FEATURES,
+			      features, VDPA_ATTR_PAD))
+		return -EMSGSIZE;
+
+	if (nla_put_u32(msg, VDPA_ATTR_DEV_QUEUE_INDEX, index))
+		return -EMSGSIZE;
+
+	err = vdev->config->get_vendor_vq_stats(vdev, index, msg, info->extack);
+	if (err)
+		return err;
+
+	return 0;
+}
+
+static int vendor_stats_fill(struct vdpa_device *vdev, struct sk_buff *msg,
+			     struct genl_info *info, u32 index)
+{
+	int err;
+
+	mutex_lock(&vdev->cf_mutex);
+	if (!vdev->config->get_vendor_vq_stats) {
+		err = -EOPNOTSUPP;
+		goto out;
+	}
+
+	err = vdpa_fill_stats_rec(vdev, msg, info, index);
+out:
+	mutex_unlock(&vdev->cf_mutex);
+	return err;
+}
+
+static int vdpa_dev_vendor_stats_fill(struct vdpa_device *vdev,
+				      struct sk_buff *msg,
+				      struct genl_info *info, u32 index)
+{
+	u32 device_id;
+	void *hdr;
+	int err;
+	u32 portid = info->snd_portid;
+	u32 seq = info->snd_seq;
+	u32 flags = 0;
+
+	hdr = genlmsg_put(msg, portid, seq, &vdpa_nl_family, flags,
+			  VDPA_CMD_DEV_VSTATS_GET);
+	if (!hdr)
+		return -EMSGSIZE;
+
+	if (nla_put_string(msg, VDPA_ATTR_DEV_NAME, dev_name(&vdev->dev))) {
+		err = -EMSGSIZE;
+		goto undo_msg;
+	}
+
+	device_id = vdev->config->get_device_id(vdev);
+	if (nla_put_u32(msg, VDPA_ATTR_DEV_ID, device_id)) {
+		err = -EMSGSIZE;
+		goto undo_msg;
+	}
+
+	switch (device_id) {
+	case VIRTIO_ID_NET:
+		if (index > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX) {
+			NL_SET_ERR_MSG_MOD(info->extack, "queue index excceeds max value");
+			err = -ERANGE;
+			break;
+		}
+
+		err = vendor_stats_fill(vdev, msg, info, index);
+		break;
+	default:
+		err = -EOPNOTSUPP;
+		break;
+	}
+	genlmsg_end(msg, hdr);
+
+	return err;
+
+undo_msg:
+	genlmsg_cancel(msg, hdr);
+	return err;
+}
+
 static int vdpa_nl_cmd_dev_config_get_doit(struct sk_buff *skb, struct genl_info *info)
 {
 	struct vdpa_device *vdev;
@@ -995,6 +1097,60 @@ vdpa_nl_cmd_dev_config_get_dumpit(struct sk_buff *msg, struct netlink_callback *
 	return msg->len;
 }
 
+static int vdpa_nl_cmd_dev_stats_get_doit(struct sk_buff *skb,
+					  struct genl_info *info)
+{
+	struct vdpa_device *vdev;
+	struct sk_buff *msg;
+	const char *devname;
+	struct device *dev;
+	u32 index;
+	int err;
+
+	if (!info->attrs[VDPA_ATTR_DEV_NAME])
+		return -EINVAL;
+
+	if (!info->attrs[VDPA_ATTR_DEV_QUEUE_INDEX])
+		return -EINVAL;
+
+	devname = nla_data(info->attrs[VDPA_ATTR_DEV_NAME]);
+	msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+	if (!msg)
+		return -ENOMEM;
+
+	index = nla_get_u32(info->attrs[VDPA_ATTR_DEV_QUEUE_INDEX]);
+	mutex_lock(&vdpa_dev_mutex);
+	dev = bus_find_device(&vdpa_bus, NULL, devname, vdpa_name_match);
+	if (!dev) {
+		NL_SET_ERR_MSG_MOD(info->extack, "device not found");
+		err = -ENODEV;
+		goto dev_err;
+	}
+	vdev = container_of(dev, struct vdpa_device, dev);
+	if (!vdev->mdev) {
+		NL_SET_ERR_MSG_MOD(info->extack, "unmanaged vdpa device");
+		err = -EINVAL;
+		goto mdev_err;
+	}
+	err = vdpa_dev_vendor_stats_fill(vdev, msg, info, index);
+	if (err)
+		goto mdev_err;
+
+	err = genlmsg_reply(msg, info);
+
+	put_device(dev);
+	mutex_unlock(&vdpa_dev_mutex);
+
+	return err;
+
+mdev_err:
+	put_device(dev);
+dev_err:
+	nlmsg_free(msg);
+	mutex_unlock(&vdpa_dev_mutex);
+	return err;
+}
+
 static const struct nla_policy vdpa_nl_policy[VDPA_ATTR_MAX + 1] = {
 	[VDPA_ATTR_MGMTDEV_BUS_NAME] = { .type = NLA_NUL_STRING },
 	[VDPA_ATTR_MGMTDEV_DEV_NAME] = { .type = NLA_STRING },
@@ -1035,6 +1191,12 @@ static const struct genl_ops vdpa_nl_ops[] = {
 		.doit = vdpa_nl_cmd_dev_config_get_doit,
 		.dumpit = vdpa_nl_cmd_dev_config_get_dumpit,
 	},
+	{
+		.cmd = VDPA_CMD_DEV_VSTATS_GET,
+		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+		.doit = vdpa_nl_cmd_dev_stats_get_doit,
+		.flags = GENL_ADMIN_PERM,
+	},
 };
 
 static struct genl_family vdpa_nl_family __ro_after_init = {
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 8943a209202e..2ae8443331e1 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -276,6 +276,9 @@ struct vdpa_config_ops {
 			    const struct vdpa_vq_state *state);
 	int (*get_vq_state)(struct vdpa_device *vdev, u16 idx,
 			    struct vdpa_vq_state *state);
+	int (*get_vendor_vq_stats)(struct vdpa_device *vdev, u16 idx,
+				   struct sk_buff *msg,
+				   struct netlink_ext_ack *extack);
 	struct vdpa_notification_area
 	(*get_vq_notification)(struct vdpa_device *vdev, u16 idx);
 	/* vq irq is not expected to be changed once DRIVER_OK is set */
diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
index 1061d8d2d09d..25c55cab3d7c 100644
--- a/include/uapi/linux/vdpa.h
+++ b/include/uapi/linux/vdpa.h
@@ -18,6 +18,7 @@ enum vdpa_command {
 	VDPA_CMD_DEV_DEL,
 	VDPA_CMD_DEV_GET,		/* can dump */
 	VDPA_CMD_DEV_CONFIG_GET,	/* can dump */
+	VDPA_CMD_DEV_VSTATS_GET,
 };
 
 enum vdpa_attr {
@@ -46,6 +47,11 @@ enum vdpa_attr {
 	VDPA_ATTR_DEV_NEGOTIATED_FEATURES,	/* u64 */
 	VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,		/* u32 */
 	VDPA_ATTR_DEV_SUPPORTED_FEATURES,	/* u64 */
+
+	VDPA_ATTR_DEV_QUEUE_INDEX,		/* u32 */
+	VDPA_ATTR_DEV_VENDOR_ATTR_NAME,		/* string */
+	VDPA_ATTR_DEV_VENDOR_ATTR_VALUE,	/* u64 */
+
 	/* new attributes must be added above here */
 	VDPA_ATTR_MAX,
 };
-- 
2.35.1
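The (attribute name, attribute value) pairs described above are carried as netlink attributes, which are type-length-value records packed into the reply message. A minimal userspace sketch of that encoding idea (simplified 4-byte header, no padding or alignment rules; `ATTR_NAME`/`ATTR_VALUE` are illustrative type codes, not the real uapi values, and this is not the kernel's nla_put API):

```c
#include <stdint.h>
#include <string.h>

/* Simplified TLV record in the spirit of a netlink attribute:
 * a 2-byte length (header + payload) and 2-byte type, then the payload. */
struct tlv_hdr {
	uint16_t len;
	uint16_t type;
};

enum { ATTR_NAME = 1, ATTR_VALUE = 2 };	/* illustrative type codes */

/* Append one attribute at offset 'off'; return the next write offset. */
static size_t put_attr(uint8_t *buf, size_t off, uint16_t type,
		       const void *payload, uint16_t plen)
{
	struct tlv_hdr h = { (uint16_t)(sizeof(h) + plen), type };

	memcpy(buf + off, &h, sizeof(h));
	memcpy(buf + off + sizeof(h), payload, plen);
	return off + sizeof(h) + plen;
}

/* Emit one (name, value) statistics pair, analogous to putting
 * VDPA_ATTR_DEV_VENDOR_ATTR_NAME followed by _VALUE. */
static size_t put_stat(uint8_t *buf, size_t off, const char *name,
		       uint64_t val)
{
	off = put_attr(buf, off, ATTR_NAME, name,
		       (uint16_t)(strlen(name) + 1));
	return put_attr(buf, off, ATTR_VALUE, &val, sizeof(val));
}
```

A consumer walks the buffer record by record using each header's length field, which is how the vdpa tool turns the reply into the `name ... value ...` output shown in the examples.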
From: Eli Cohen
Subject: [PATCH v8 3/6] net/vdpa: Use readers/writers semaphore instead of vdpa_dev_mutex
Date: Wed, 18 May 2022 16:38:01 +0300
Message-ID: <20220518133804.1075129-4-elic@nvidia.com>
In-Reply-To: <20220518133804.1075129-1-elic@nvidia.com>
References: <20220518133804.1075129-1-elic@nvidia.com>
Use rw_semaphore instead of mutex to control access to vdpa devices. This
can be especially beneficial in case processes poll on statistics
information.

Suggested-by: Si-Wei Liu
Reviewed-by: Si-Wei Liu
Acked-by: Jason Wang
Signed-off-by: Eli Cohen
---
 drivers/vdpa/vdpa.c | 64 ++++++++++++++++++++++-----------------------
 1 file changed, 32 insertions(+), 32 deletions(-)

diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
index 31b5eb2c0778..2ff7de5e6b2f 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -18,7 +18,7 @@
 
 static LIST_HEAD(mdev_head);
 /* A global mutex that protects vdpa management device and device level operations. */
-static DEFINE_MUTEX(vdpa_dev_mutex);
+static DECLARE_RWSEM(vdpa_dev_lock);
 static DEFINE_IDA(vdpa_index_ida);
 
 void vdpa_set_status(struct vdpa_device *vdev, u8 status)
@@ -238,7 +238,7 @@ static int __vdpa_register_device(struct vdpa_device *vdev, u32 nvqs)
 
 	vdev->nvqs = nvqs;
 
-	lockdep_assert_held(&vdpa_dev_mutex);
+	lockdep_assert_held(&vdpa_dev_lock);
 	dev = bus_find_device(&vdpa_bus, NULL, dev_name(&vdev->dev), vdpa_name_match);
 	if (dev) {
 		put_device(dev);
@@ -278,9 +278,9 @@ int vdpa_register_device(struct vdpa_device *vdev, u32 nvqs)
 {
 	int err;
 
-	mutex_lock(&vdpa_dev_mutex);
+	down_write(&vdpa_dev_lock);
 	err = __vdpa_register_device(vdev, nvqs);
-	mutex_unlock(&vdpa_dev_mutex);
+	up_write(&vdpa_dev_lock);
 	return err;
 }
 EXPORT_SYMBOL_GPL(vdpa_register_device);
@@ -293,7 +293,7 @@ EXPORT_SYMBOL_GPL(vdpa_register_device);
  */
 void _vdpa_unregister_device(struct vdpa_device *vdev)
 {
-	lockdep_assert_held(&vdpa_dev_mutex);
+	lockdep_assert_held(&vdpa_dev_lock);
 	WARN_ON(!vdev->mdev);
 	device_unregister(&vdev->dev);
 }
@@ -305,9 +305,9 @@ EXPORT_SYMBOL_GPL(_vdpa_unregister_device);
  */
 void vdpa_unregister_device(struct vdpa_device *vdev)
 {
-	mutex_lock(&vdpa_dev_mutex);
+	down_write(&vdpa_dev_lock);
 	device_unregister(&vdev->dev);
-	mutex_unlock(&vdpa_dev_mutex);
+	up_write(&vdpa_dev_lock);
 }
 EXPORT_SYMBOL_GPL(vdpa_unregister_device);
 
@@ -352,9 +352,9 @@ int vdpa_mgmtdev_register(struct vdpa_mgmt_dev *mdev)
 		return -EINVAL;
 
 	INIT_LIST_HEAD(&mdev->list);
-	mutex_lock(&vdpa_dev_mutex);
+	down_write(&vdpa_dev_lock);
 	list_add_tail(&mdev->list, &mdev_head);
-	mutex_unlock(&vdpa_dev_mutex);
+	up_write(&vdpa_dev_lock);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(vdpa_mgmtdev_register);
@@ -371,14 +371,14 @@ static int vdpa_match_remove(struct device *dev, void *data)
 
 void vdpa_mgmtdev_unregister(struct vdpa_mgmt_dev *mdev)
 {
-	mutex_lock(&vdpa_dev_mutex);
+	down_write(&vdpa_dev_lock);
 
 	list_del(&mdev->list);
 
 	/* Filter out all the entries belong to this management device and delete it. */
 	bus_for_each_dev(&vdpa_bus, NULL, mdev, vdpa_match_remove);
 
-	mutex_unlock(&vdpa_dev_mutex);
+	up_write(&vdpa_dev_lock);
 }
 EXPORT_SYMBOL_GPL(vdpa_mgmtdev_unregister);
 
@@ -532,17 +532,17 @@ static int vdpa_nl_cmd_mgmtdev_get_doit(struct sk_buff *skb, struct genl_info *i
 	if (!msg)
 		return -ENOMEM;
 
-	mutex_lock(&vdpa_dev_mutex);
+	down_read(&vdpa_dev_lock);
 	mdev = vdpa_mgmtdev_get_from_attr(info->attrs);
 	if (IS_ERR(mdev)) {
-		mutex_unlock(&vdpa_dev_mutex);
+		up_read(&vdpa_dev_lock);
 		NL_SET_ERR_MSG_MOD(info->extack, "Fail to find the specified mgmt device");
 		err = PTR_ERR(mdev);
 		goto out;
 	}
 
 	err = vdpa_mgmtdev_fill(mdev, msg, info->snd_portid, info->snd_seq, 0);
-	mutex_unlock(&vdpa_dev_mutex);
+	up_read(&vdpa_dev_lock);
 	if (err)
 		goto out;
 	err = genlmsg_reply(msg, info);
@@ -561,7 +561,7 @@ vdpa_nl_cmd_mgmtdev_get_dumpit(struct sk_buff *msg, struct netlink_callback *cb)
 	int idx = 0;
 	int err;
 
-	mutex_lock(&vdpa_dev_mutex);
+	down_read(&vdpa_dev_lock);
 	list_for_each_entry(mdev, &mdev_head, list) {
 		if (idx < start) {
 			idx++;
@@ -574,7 +574,7 @@ vdpa_nl_cmd_mgmtdev_get_dumpit(struct sk_buff *msg, struct netlink_callback *cb)
 		idx++;
 	}
 out:
-	mutex_unlock(&vdpa_dev_mutex);
+	up_read(&vdpa_dev_lock);
 	cb->args[0] = idx;
 	return msg->len;
 }
@@ -627,7 +627,7 @@ static int vdpa_nl_cmd_dev_add_set_doit(struct sk_buff *skb, struct genl_info *i
 	    !netlink_capable(skb, CAP_NET_ADMIN))
 		return -EPERM;
 
-	mutex_lock(&vdpa_dev_mutex);
+	down_write(&vdpa_dev_lock);
 	mdev = vdpa_mgmtdev_get_from_attr(info->attrs);
 	if (IS_ERR(mdev)) {
 		NL_SET_ERR_MSG_MOD(info->extack, "Fail to find the specified management device");
@@ -643,7 +643,7 @@ static int vdpa_nl_cmd_dev_add_set_doit(struct sk_buff *skb, struct genl_info *i
 
 	err = mdev->ops->dev_add(mdev, name, &config);
err:
-	mutex_unlock(&vdpa_dev_mutex);
+	up_write(&vdpa_dev_lock);
 	return err;
 }
 
@@ -659,7 +659,7 @@ static int vdpa_nl_cmd_dev_del_set_doit(struct sk_buff *skb, struct genl_info *i
 		return -EINVAL;
 	name = nla_data(info->attrs[VDPA_ATTR_DEV_NAME]);
 
-	mutex_lock(&vdpa_dev_mutex);
+	down_write(&vdpa_dev_lock);
 	dev = bus_find_device(&vdpa_bus, NULL, name, vdpa_name_match);
 	if (!dev) {
 		NL_SET_ERR_MSG_MOD(info->extack, "device not found");
@@ -677,7 +677,7 @@ static int vdpa_nl_cmd_dev_del_set_doit(struct sk_buff *skb, struct genl_info *i
mdev_err:
 	put_device(dev);
dev_err:
-	mutex_unlock(&vdpa_dev_mutex);
+	up_write(&vdpa_dev_lock);
 	return err;
 }
 
@@ -743,7 +743,7 @@ static int vdpa_nl_cmd_dev_get_doit(struct sk_buff *skb, struct genl_info *info)
 	if (!msg)
 		return -ENOMEM;
 
-	mutex_lock(&vdpa_dev_mutex);
+	down_read(&vdpa_dev_lock);
 	dev = bus_find_device(&vdpa_bus, NULL, devname, vdpa_name_match);
 	if (!dev) {
 		NL_SET_ERR_MSG_MOD(info->extack, "device not found");
@@ -761,13 +761,13 @@ static int vdpa_nl_cmd_dev_get_doit(struct sk_buff *skb, struct genl_info *info)
 
 	err = genlmsg_reply(msg, info);
 	put_device(dev);
-	mutex_unlock(&vdpa_dev_mutex);
+	up_read(&vdpa_dev_lock);
 	return err;
 
mdev_err:
 	put_device(dev);
err:
-	mutex_unlock(&vdpa_dev_mutex);
+	up_read(&vdpa_dev_lock);
 	nlmsg_free(msg);
 	return err;
 }
@@ -809,9 +809,9 @@ static int vdpa_nl_cmd_dev_get_dumpit(struct sk_buff *msg, struct netlink_callba
 	info.start_idx = cb->args[0];
 	info.idx = 0;
 
-	mutex_lock(&vdpa_dev_mutex);
+	down_read(&vdpa_dev_lock);
 	bus_for_each_dev(&vdpa_bus, NULL, &info, vdpa_dev_dump);
-	mutex_unlock(&vdpa_dev_mutex);
+	up_read(&vdpa_dev_lock);
 	cb->args[0] = info.idx;
 	return msg->len;
 }
@@ -1031,7 +1031,7 @@ static int vdpa_nl_cmd_dev_config_get_doit(struct sk_buff *skb, struct genl_info
 	if (!msg)
 		return -ENOMEM;
 
-	mutex_lock(&vdpa_dev_mutex);
+	down_read(&vdpa_dev_lock);
 	dev = bus_find_device(&vdpa_bus, NULL, devname, vdpa_name_match);
 	if (!dev) {
 		NL_SET_ERR_MSG_MOD(info->extack, "device not found");
@@ -1052,7 +1052,7 @@ static int vdpa_nl_cmd_dev_config_get_doit(struct sk_buff *skb, struct genl_info
mdev_err:
 	put_device(dev);
dev_err:
-	mutex_unlock(&vdpa_dev_mutex);
+	up_read(&vdpa_dev_lock);
 	if (err)
 		nlmsg_free(msg);
 	return err;
@@ -1090,9 +1090,9 @@ vdpa_nl_cmd_dev_config_get_dumpit(struct sk_buff *msg, struct netlink_callback *
 	info.start_idx = cb->args[0];
 	info.idx = 0;
 
-	mutex_lock(&vdpa_dev_mutex);
+	down_read(&vdpa_dev_lock);
 	bus_for_each_dev(&vdpa_bus, NULL, &info, vdpa_dev_config_dump);
-	mutex_unlock(&vdpa_dev_mutex);
+	up_read(&vdpa_dev_lock);
 	cb->args[0] = info.idx;
 	return msg->len;
 }
@@ -1119,7 +1119,7 @@ static int vdpa_nl_cmd_dev_stats_get_doit(struct sk_buff *skb,
 		return -ENOMEM;
 
 	index = nla_get_u32(info->attrs[VDPA_ATTR_DEV_QUEUE_INDEX]);
-	mutex_lock(&vdpa_dev_mutex);
+	down_read(&vdpa_dev_lock);
 	dev = bus_find_device(&vdpa_bus, NULL, devname, vdpa_name_match);
 	if (!dev) {
 		NL_SET_ERR_MSG_MOD(info->extack, "device not found");
@@ -1139,7 +1139,7 @@ static int vdpa_nl_cmd_dev_stats_get_doit(struct sk_buff *skb,
 	err = genlmsg_reply(msg, info);
 
 	put_device(dev);
-	mutex_unlock(&vdpa_dev_mutex);
+	up_read(&vdpa_dev_lock);
 
 	return err;
 
@@ -1147,7 +1147,7 @@ static int vdpa_nl_cmd_dev_stats_get_doit(struct sk_buff *skb,
 	put_device(dev);
dev_err:
 	nlmsg_free(msg);
-	mutex_unlock(&vdpa_dev_mutex);
+	up_read(&vdpa_dev_lock);
 	return err;
 }
 
-- 
2.35.1

From nobody Thu May 7 20:49:47 2026
From: Eli Cohen
Subject: [PATCH v8 4/6] net/vdpa: Use readers/writers semaphore instead of cf_mutex
Date: Wed, 18 May 2022 16:38:02 +0300
Message-ID: <20220518133804.1075129-5-elic@nvidia.com>
In-Reply-To: <20220518133804.1075129-1-elic@nvidia.com>
References: <20220518133804.1075129-1-elic@nvidia.com>
Replace cf_mutex with rw_semaphore to reflect the fact that some calls
could be called concurrently but can suffice with read lock.

Suggested-by: Si-Wei Liu
Signed-off-by: Eli Cohen
---
 drivers/vdpa/vdpa.c  | 25 ++++++++++++-------------
 include/linux/vdpa.h | 12 ++++++------
 2 files changed, 18 insertions(+), 19 deletions(-)

diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
index 2ff7de5e6b2f..9d3534a0bc5f 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -23,9 +23,9 @@ static DEFINE_IDA(vdpa_index_ida);
 
 void vdpa_set_status(struct vdpa_device *vdev, u8 status)
 {
-	mutex_lock(&vdev->cf_mutex);
+	down_write(&vdev->cf_lock);
 	vdev->config->set_status(vdev, status);
-	mutex_unlock(&vdev->cf_mutex);
+	up_write(&vdev->cf_lock);
 }
 EXPORT_SYMBOL(vdpa_set_status);
 
@@ -148,7 +148,6 @@ static void vdpa_release_dev(struct device *d)
 		ops->free(vdev);
 
 	ida_simple_remove(&vdpa_index_ida, vdev->index);
-	mutex_destroy(&vdev->cf_mutex);
 	kfree(vdev->driver_override);
 	kfree(vdev);
 }
@@ -211,7 +210,7 @@ struct vdpa_device *__vdpa_alloc_device(struct device *parent,
 	if (err)
 		goto err_name;
 
-	mutex_init(&vdev->cf_mutex);
+	init_rwsem(&vdev->cf_lock);
 	device_initialize(&vdev->dev);
 
 	return vdev;
@@ -407,9 +406,9 @@ static void vdpa_get_config_unlocked(struct vdpa_device *vdev,
 void vdpa_get_config(struct vdpa_device *vdev, unsigned int offset,
 		     void *buf, unsigned int len)
 {
-	mutex_lock(&vdev->cf_mutex);
+	down_read(&vdev->cf_lock);
 	vdpa_get_config_unlocked(vdev, offset, buf, len);
-	mutex_unlock(&vdev->cf_mutex);
+	up_read(&vdev->cf_lock);
 }
 EXPORT_SYMBOL_GPL(vdpa_get_config);
 
@@ -423,9 +422,9 @@ EXPORT_SYMBOL_GPL(vdpa_get_config);
 void vdpa_set_config(struct vdpa_device *vdev, unsigned int offset,
 		     const void *buf, unsigned int length)
 {
-	mutex_lock(&vdev->cf_mutex);
+	down_write(&vdev->cf_lock);
 	vdev->config->set_config(vdev, offset, buf, length);
-	mutex_unlock(&vdev->cf_mutex);
+	up_write(&vdev->cf_lock);
 }
 EXPORT_SYMBOL_GPL(vdpa_set_config);
 
@@ -866,7 +865,7 @@ vdpa_dev_config_fill(struct vdpa_device *vdev, struct sk_buff *msg, u32 portid,
 	u8 status;
 	int err;
 
-	mutex_lock(&vdev->cf_mutex);
+	down_read(&vdev->cf_lock);
 	status = vdev->config->get_status(vdev);
 	if (!(status & VIRTIO_CONFIG_S_FEATURES_OK)) {
 		NL_SET_ERR_MSG_MOD(extack, "Features negotiation not completed");
@@ -903,14 +902,14 @@ vdpa_dev_config_fill(struct vdpa_device *vdev, struct sk_buff *msg, u32 portid,
 	if (err)
 		goto msg_err;
 
-	mutex_unlock(&vdev->cf_mutex);
+	up_read(&vdev->cf_lock);
 	genlmsg_end(msg, hdr);
 	return 0;
 
msg_err:
 	genlmsg_cancel(msg, hdr);
out:
-	mutex_unlock(&vdev->cf_mutex);
+	up_read(&vdev->cf_lock);
 	return err;
 }
 
@@ -954,7 +953,7 @@ static int vendor_stats_fill(struct vdpa_device *vdev, struct sk_buff *msg,
 {
 	int err;
 
-	mutex_lock(&vdev->cf_mutex);
+	down_read(&vdev->cf_lock);
 	if (!vdev->config->get_vendor_vq_stats) {
 		err = -EOPNOTSUPP;
 		goto out;
@@ -962,7 +961,7 @@ static int vendor_stats_fill(struct vdpa_device *vdev, struct sk_buff *msg,
 
 	err = vdpa_fill_stats_rec(vdev, msg, info, index);
out:
-	mutex_unlock(&vdev->cf_mutex);
+	up_read(&vdev->cf_lock);
 	return err;
 }
 
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 2ae8443331e1..2cb14847831e 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -66,7 +66,7 @@ struct vdpa_mgmt_dev;
  * @dma_dev: the actual device that is performing DMA
  * @driver_override: driver name to force a match
  * @config: the configuration ops for this device.
- * @cf_mutex: Protects get and set access to configuration layout.
+ * @cf_lock: Protects get and set access to configuration layout.
  * @index: device index
  * @features_valid: were features initialized? for legacy guests
  * @use_va: indicate whether virtual address must be used by this device
@@ -79,7 +79,7 @@ struct vdpa_device {
 	struct device *dma_dev;
 	const char *driver_override;
 	const struct vdpa_config_ops *config;
-	struct mutex cf_mutex; /* Protects get/set config */
+	struct rw_semaphore cf_lock; /* Protects get/set config */
 	unsigned int index;
 	bool features_valid;
 	bool use_va;
@@ -398,10 +398,10 @@ static inline int vdpa_reset(struct vdpa_device *vdev)
 	const struct vdpa_config_ops *ops = vdev->config;
 	int ret;
 
-	mutex_lock(&vdev->cf_mutex);
+	down_write(&vdev->cf_lock);
 	vdev->features_valid = false;
 	ret = ops->reset(vdev);
-	mutex_unlock(&vdev->cf_mutex);
+	up_write(&vdev->cf_lock);
 	return ret;
 }
 
@@ -420,9 +420,9 @@ static inline int vdpa_set_features(struct vdpa_device *vdev, u64 features)
 {
 	int ret;
 
-	mutex_lock(&vdev->cf_mutex);
+	down_write(&vdev->cf_lock);
 	ret = vdpa_set_features_unlocked(vdev, features);
-	mutex_unlock(&vdev->cf_mutex);
+	up_write(&vdev->cf_lock);
 
 	return ret;
 }
-- 
2.35.1

From nobody Thu May 7 20:49:47 2026
From: Eli Cohen
Subject: [PATCH v8 5/6] vdpa/mlx5: Add support for reading descriptor statistics
Date: Wed, 18 May 2022 16:38:03 +0300
Message-ID: <20220518133804.1075129-6-elic@nvidia.com>
In-Reply-To: <20220518133804.1075129-1-elic@nvidia.com>
References: <20220518133804.1075129-1-elic@nvidia.com>
43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a;Ip=[12.22.5.238];Helo=[mail.nvidia.com] X-MS-Exchange-CrossTenant-AuthSource: BN8NAM11FT011.eop-nam11.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4529 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Implement the get_vq_stats calback of vdpa_config_ops to return the statistics for a virtqueue. The statistics are provided as vendor specific statistics where the driver provides a pair of attribute name and attribute value. Currently supported are received descriptors and completed descriptors. Signed-off-by: Eli Cohen --- drivers/vdpa/mlx5/core/mlx5_vdpa.h | 2 + drivers/vdpa/mlx5/net/mlx5_vnet.c | 149 +++++++++++++++++++++++++++++ include/linux/mlx5/mlx5_ifc.h | 1 + include/linux/mlx5/mlx5_ifc_vdpa.h | 39 ++++++++ 4 files changed, 191 insertions(+) diff --git a/drivers/vdpa/mlx5/core/mlx5_vdpa.h b/drivers/vdpa/mlx5/core/ml= x5_vdpa.h index daaf7b503677..44104093163b 100644 --- a/drivers/vdpa/mlx5/core/mlx5_vdpa.h +++ b/drivers/vdpa/mlx5/core/mlx5_vdpa.h @@ -61,6 +61,8 @@ struct mlx5_control_vq { struct vringh_kiov riov; struct vringh_kiov wiov; unsigned short head; + unsigned int received_desc; + unsigned int completed_desc; }; =20 struct mlx5_vdpa_wq_ent { diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5= _vnet.c index e0de44000d92..2b815ef850c8 100644 --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c @@ -119,6 +119,7 @@ struct mlx5_vdpa_virtqueue { struct mlx5_vdpa_umem umem2; struct mlx5_vdpa_umem umem3; =20 + u32 counter_set_id; bool initialized; int index; u32 virtq_id; @@ -818,6 +819,12 @@ static u16 get_features_12_3(u64 features) (!!(features & 
BIT_ULL(VIRTIO_NET_F_GUEST_CSUM)) << 6); } =20 +static bool counters_supported(const struct mlx5_vdpa_dev *mvdev) +{ + return MLX5_CAP_GEN_64(mvdev->mdev, general_obj_types) & + BIT_ULL(MLX5_OBJ_TYPE_VIRTIO_Q_COUNTERS); +} + static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_v= irtqueue *mvq) { int inlen =3D MLX5_ST_SZ_BYTES(create_virtio_net_q_in); @@ -872,6 +879,8 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev,= struct mlx5_vdpa_virtque MLX5_SET(virtio_q, vq_ctx, umem_3_id, mvq->umem3.id); MLX5_SET(virtio_q, vq_ctx, umem_3_size, mvq->umem3.size); MLX5_SET(virtio_q, vq_ctx, pd, ndev->mvdev.res.pdn); + if (counters_supported(&ndev->mvdev)) + MLX5_SET(virtio_q, vq_ctx, counter_set_id, mvq->counter_set_id); =20 err =3D mlx5_cmd_exec(ndev->mvdev.mdev, in, inlen, out, sizeof(out)); if (err) @@ -1135,6 +1144,47 @@ static int modify_virtqueue(struct mlx5_vdpa_net *nd= ev, struct mlx5_vdpa_virtque return err; } =20 +static int counter_set_alloc(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_= virtqueue *mvq) +{ + u32 in[MLX5_ST_SZ_DW(create_virtio_q_counters_in)] =3D {}; + u32 out[MLX5_ST_SZ_DW(create_virtio_q_counters_out)] =3D {}; + void *cmd_hdr; + int err; + + if (!counters_supported(&ndev->mvdev)) + return 0; + + cmd_hdr =3D MLX5_ADDR_OF(create_virtio_q_counters_in, in, hdr); + + MLX5_SET(general_obj_in_cmd_hdr, cmd_hdr, opcode, MLX5_CMD_OP_CREATE_GENE= RAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, cmd_hdr, obj_type, MLX5_OBJ_TYPE_VIRTIO_= Q_COUNTERS); + MLX5_SET(general_obj_in_cmd_hdr, cmd_hdr, uid, ndev->mvdev.res.uid); + + err =3D mlx5_cmd_exec(ndev->mvdev.mdev, in, sizeof(in), out, sizeof(out)); + if (err) + return err; + + mvq->counter_set_id =3D MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return 0; +} + +static void counter_set_dealloc(struct mlx5_vdpa_net *ndev, struct mlx5_vd= pa_virtqueue *mvq) +{ + u32 in[MLX5_ST_SZ_DW(destroy_virtio_q_counters_in)] =3D {}; + u32 out[MLX5_ST_SZ_DW(destroy_virtio_q_counters_out)] =3D 
{};
+
+	if (!counters_supported(&ndev->mvdev))
+		return;
+
+	MLX5_SET(destroy_virtio_q_counters_in, in, hdr.opcode, MLX5_CMD_OP_DESTROY_GENERAL_OBJECT);
+	MLX5_SET(destroy_virtio_q_counters_in, in, hdr.obj_id, mvq->counter_set_id);
+	MLX5_SET(destroy_virtio_q_counters_in, in, hdr.uid, ndev->mvdev.res.uid);
+	MLX5_SET(destroy_virtio_q_counters_in, in, hdr.obj_type, MLX5_OBJ_TYPE_VIRTIO_Q_COUNTERS);
+	if (mlx5_cmd_exec(ndev->mvdev.mdev, in, sizeof(in), out, sizeof(out)))
+		mlx5_vdpa_warn(&ndev->mvdev, "dealloc counter set 0x%x\n", mvq->counter_set_id);
+}
+
 static int setup_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq)
 {
 	u16 idx = mvq->index;
@@ -1162,6 +1212,10 @@ static int setup_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq)
 	if (err)
 		goto err_connect;
 
+	err = counter_set_alloc(ndev, mvq);
+	if (err)
+		goto err_counter;
+
 	err = create_virtqueue(ndev, mvq);
 	if (err)
 		goto err_connect;
@@ -1179,6 +1233,8 @@ static int setup_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq)
 	return 0;
 
 err_connect:
+	counter_set_dealloc(ndev, mvq);
+err_counter:
 	qp_destroy(ndev, &mvq->vqqp);
 err_vqqp:
 	qp_destroy(ndev, &mvq->fwqp);
@@ -1223,6 +1279,7 @@ static void teardown_vq(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *
 
 	suspend_vq(ndev, mvq);
 	destroy_virtqueue(ndev, mvq);
+	counter_set_dealloc(ndev, mvq);
 	qp_destroy(ndev, &mvq->vqqp);
 	qp_destroy(ndev, &mvq->fwqp);
 	cq_destroy(ndev, mvq->index);
@@ -1659,6 +1716,7 @@ static void mlx5_cvq_kick_handler(struct work_struct *work)
 		if (read != sizeof(ctrl))
 			break;
 
+		cvq->received_desc++;
 		switch (ctrl.class) {
 		case VIRTIO_NET_CTRL_MAC:
 			status = handle_ctrl_mac(mvdev, ctrl.cmd);
@@ -1682,6 +1740,7 @@ static void mlx5_cvq_kick_handler(struct work_struct *work)
 		if (vringh_need_notify_iotlb(&cvq->vring))
 			vringh_notify(&cvq->vring);
 
+		cvq->completed_desc++;
 		queue_work(mvdev->wq, &wqent->work);
 		break;
 	}
@@ -2303,6 +2362,8 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
 	mlx5_vdpa_destroy_mr(&ndev->mvdev);
 	ndev->mvdev.status = 0;
 	ndev->cur_num_vqs = 0;
+	ndev->mvdev.cvq.received_desc = 0;
+	ndev->mvdev.cvq.completed_desc = 0;
 	memset(ndev->event_cbs, 0, sizeof(*ndev->event_cbs) * (mvdev->max_vqs + 1));
 	ndev->mvdev.actual_features = 0;
 	++mvdev->generation;
@@ -2422,6 +2483,93 @@ static u64 mlx5_vdpa_get_driver_features(struct vdpa_device *vdev)
 	return mvdev->actual_features;
 }
 
+static int counter_set_query(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq,
+			     u64 *received_desc, u64 *completed_desc)
+{
+	u32 in[MLX5_ST_SZ_DW(query_virtio_q_counters_in)] = {};
+	u32 out[MLX5_ST_SZ_DW(query_virtio_q_counters_out)] = {};
+	void *cmd_hdr;
+	void *ctx;
+	int err;
+
+	if (!counters_supported(&ndev->mvdev))
+		return -EOPNOTSUPP;
+
+	if (mvq->fw_state != MLX5_VIRTIO_NET_Q_OBJECT_STATE_RDY)
+		return -EAGAIN;
+
+	cmd_hdr = MLX5_ADDR_OF(query_virtio_q_counters_in, in, hdr);
+
+	MLX5_SET(general_obj_in_cmd_hdr, cmd_hdr, opcode, MLX5_CMD_OP_QUERY_GENERAL_OBJECT);
+	MLX5_SET(general_obj_in_cmd_hdr, cmd_hdr, obj_type, MLX5_OBJ_TYPE_VIRTIO_Q_COUNTERS);
+	MLX5_SET(general_obj_in_cmd_hdr, cmd_hdr, uid, ndev->mvdev.res.uid);
+	MLX5_SET(general_obj_in_cmd_hdr, cmd_hdr, obj_id, mvq->counter_set_id);
+
+	err = mlx5_cmd_exec(ndev->mvdev.mdev, in, sizeof(in), out, sizeof(out));
+	if (err)
+		return err;
+
+	ctx = MLX5_ADDR_OF(query_virtio_q_counters_out, out, counters);
+	*received_desc = MLX5_GET64(virtio_q_counters, ctx, received_desc);
+	*completed_desc = MLX5_GET64(virtio_q_counters, ctx, completed_desc);
+	return 0;
+}
+
+static int mlx5_vdpa_get_vendor_vq_stats(struct vdpa_device *vdev, u16 idx,
+					 struct sk_buff *msg,
+					 struct netlink_ext_ack *extack)
+{
+	struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
+	struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
+	struct mlx5_vdpa_virtqueue *mvq;
+	struct mlx5_control_vq *cvq;
+	u64 received_desc;
+	u64 completed_desc;
+	int err = 0;
+
+	mutex_lock(&ndev->reslock);
+	if (!is_index_valid(mvdev, idx)) {
+		NL_SET_ERR_MSG_MOD(extack, "virtqueue index is not valid");
+		err = -EINVAL;
+		goto out_err;
+	}
+
+	if (idx == ctrl_vq_idx(mvdev)) {
+		cvq = &mvdev->cvq;
+		received_desc = cvq->received_desc;
+		completed_desc = cvq->completed_desc;
+		goto out;
+	}
+
+	mvq = &ndev->vqs[idx];
+	err = counter_set_query(ndev, mvq, &received_desc, &completed_desc);
+	if (err) {
+		NL_SET_ERR_MSG_MOD(extack, "failed to query hardware");
+		goto out_err;
+	}
+
+out:
+	err = -EMSGSIZE;
+	if (nla_put_string(msg, VDPA_ATTR_DEV_VENDOR_ATTR_NAME, "received_desc"))
+		goto out_err;
+
+	if (nla_put_u64_64bit(msg, VDPA_ATTR_DEV_VENDOR_ATTR_VALUE, received_desc,
+			      VDPA_ATTR_PAD))
+		goto out_err;
+
+	if (nla_put_string(msg, VDPA_ATTR_DEV_VENDOR_ATTR_NAME, "completed_desc"))
+		goto out_err;
+
+	if (nla_put_u64_64bit(msg, VDPA_ATTR_DEV_VENDOR_ATTR_VALUE, completed_desc,
+			      VDPA_ATTR_PAD))
+		goto out_err;
+
+	err = 0;
+out_err:
+	mutex_unlock(&ndev->reslock);
+	return err;
+}
+
 static const struct vdpa_config_ops mlx5_vdpa_ops = {
 	.set_vq_address = mlx5_vdpa_set_vq_address,
 	.set_vq_num = mlx5_vdpa_set_vq_num,
@@ -2431,6 +2579,7 @@ static const struct vdpa_config_ops mlx5_vdpa_ops = {
 	.get_vq_ready = mlx5_vdpa_get_vq_ready,
 	.set_vq_state = mlx5_vdpa_set_vq_state,
 	.get_vq_state = mlx5_vdpa_get_vq_state,
+	.get_vendor_vq_stats = mlx5_vdpa_get_vendor_vq_stats,
 	.get_vq_notification = mlx5_get_vq_notification,
 	.get_vq_irq = mlx5_get_vq_irq,
 	.get_vq_align = mlx5_vdpa_get_vq_align,
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 49a48d7709ac..1d193d9b6029 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -94,6 +94,7 @@ enum {
 enum {
 	MLX5_OBJ_TYPE_GENEVE_TLV_OPT = 0x000b,
 	MLX5_OBJ_TYPE_VIRTIO_NET_Q = 0x000d,
+	MLX5_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c,
 	MLX5_OBJ_TYPE_MATCH_DEFINER = 0x0018,
 	MLX5_OBJ_TYPE_MKEY = 0xff01,
 	MLX5_OBJ_TYPE_QP = 0xff02,
diff --git a/include/linux/mlx5/mlx5_ifc_vdpa.h b/include/linux/mlx5/mlx5_ifc_vdpa.h
index 1a9c9d94cb59..4414ed5b6ed2 100644
--- a/include/linux/mlx5/mlx5_ifc_vdpa.h
+++ b/include/linux/mlx5/mlx5_ifc_vdpa.h
@@ -165,4 +165,43 @@ struct mlx5_ifc_modify_virtio_net_q_out_bits {
 	struct mlx5_ifc_general_obj_out_cmd_hdr_bits general_obj_out_cmd_hdr;
 };
 
+struct mlx5_ifc_virtio_q_counters_bits {
+	u8 modify_field_select[0x40];
+	u8 reserved_at_40[0x40];
+	u8 received_desc[0x40];
+	u8 completed_desc[0x40];
+	u8 error_cqes[0x20];
+	u8 bad_desc_errors[0x20];
+	u8 exceed_max_chain[0x20];
+	u8 invalid_buffer[0x20];
+	u8 reserved_at_180[0x280];
+};
+
+struct mlx5_ifc_create_virtio_q_counters_in_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters;
+};
+
+struct mlx5_ifc_create_virtio_q_counters_out_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters;
+};
+
+struct mlx5_ifc_destroy_virtio_q_counters_in_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+};
+
+struct mlx5_ifc_destroy_virtio_q_counters_out_bits {
+	struct mlx5_ifc_general_obj_out_cmd_hdr_bits hdr;
+};
+
+struct mlx5_ifc_query_virtio_q_counters_in_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+};
+
+struct mlx5_ifc_query_virtio_q_counters_out_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_virtio_q_counters_bits counters;
+};
+
 #endif /* __MLX5_IFC_VDPA_H_ */
-- 
2.35.1

From nobody Thu May 7 20:49:47 2026
From: Eli Cohen
Subject: [PATCH v8 6/6] vdpa/mlx5: Use readers/writers semaphore instead of mutex
Date: Wed, 18 May 2022 16:38:04 +0300
Message-ID: <20220518133804.1075129-7-elic@nvidia.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220518133804.1075129-1-elic@nvidia.com>
References: <20220518133804.1075129-1-elic@nvidia.com>
MIME-Version: 1.0
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

Reading statistics may be done intensively and by several processes
concurrently. A reader's lock is sufficient in this case. Change reslock
from a mutex to an rwsem.

Suggested-by: Si-Wei Liu
Signed-off-by: Eli Cohen
---
 drivers/vdpa/mlx5/net/mlx5_vnet.c | 41 ++++++++++++++-----------------
 1 file changed, 19 insertions(+), 22 deletions(-)

diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 2b815ef850c8..57cfc64248b7 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -155,7 +155,7 @@ struct mlx5_vdpa_net {
 	 * since memory map might change and we need to destroy and create
 	 * resources while driver in operational.
 	 */
-	struct mutex reslock;
+	struct rw_semaphore reslock;
 	struct mlx5_flow_table *rxft;
 	struct mlx5_fc *rx_counter;
 	struct mlx5_flow_handle *rx_rule_ucast;
@@ -1695,7 +1695,7 @@ static void mlx5_cvq_kick_handler(struct work_struct *work)
 	ndev = to_mlx5_vdpa_ndev(mvdev);
 	cvq = &mvdev->cvq;
 
-	mutex_lock(&ndev->reslock);
+	down_write(&ndev->reslock);
 
 	if (!(mvdev->status & VIRTIO_CONFIG_S_DRIVER_OK))
 		goto out;
@@ -1746,7 +1746,7 @@ static void mlx5_cvq_kick_handler(struct work_struct *work)
 	}
 
 out:
-	mutex_unlock(&ndev->reslock);
+	up_write(&ndev->reslock);
 }
 
 static void mlx5_vdpa_kick_vq(struct vdpa_device *vdev, u16 idx)
@@ -2244,7 +2244,7 @@ static int setup_driver(struct mlx5_vdpa_dev *mvdev)
 	struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
 	int err;
 
-	WARN_ON(!mutex_is_locked(&ndev->reslock));
+	WARN_ON(!rwsem_is_locked(&ndev->reslock));
 
 	if (ndev->setup) {
 		mlx5_vdpa_warn(mvdev, "setup driver called for already setup driver\n");
@@ -2292,7 +2292,7 @@ static int setup_driver(struct mlx5_vdpa_dev *mvdev)
 static void teardown_driver(struct mlx5_vdpa_net *ndev)
 {
 
-	WARN_ON(!mutex_is_locked(&ndev->reslock));
+	WARN_ON(!rwsem_is_locked(&ndev->reslock));
 
 	if (!ndev->setup)
 		return;
@@ -2322,7 +2322,7 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
 
 	print_status(mvdev, status, true);
 
-	mutex_lock(&ndev->reslock);
+	down_write(&ndev->reslock);
 
 	if ((status ^ ndev->mvdev.status) & VIRTIO_CONFIG_S_DRIVER_OK) {
 		if (status & VIRTIO_CONFIG_S_DRIVER_OK) {
@@ -2338,14 +2338,14 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
 	}
 
 	ndev->mvdev.status = status;
-	mutex_unlock(&ndev->reslock);
+	up_write(&ndev->reslock);
 	return;
 
 err_setup:
 	mlx5_vdpa_destroy_mr(&ndev->mvdev);
 	ndev->mvdev.status |= VIRTIO_CONFIG_S_FAILED;
 err_clear:
-	mutex_unlock(&ndev->reslock);
+	up_write(&ndev->reslock);
 }
 
 static int mlx5_vdpa_reset(struct vdpa_device *vdev)
@@ -2356,7 +2356,7 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
 	print_status(mvdev, 0, true);
 	mlx5_vdpa_info(mvdev, "performing device reset\n");
 
-	mutex_lock(&ndev->reslock);
+	down_write(&ndev->reslock);
 	teardown_driver(ndev);
 	clear_vqs_ready(ndev);
 	mlx5_vdpa_destroy_mr(&ndev->mvdev);
@@ -2371,7 +2371,7 @@ static int mlx5_vdpa_reset(struct vdpa_device *vdev)
 		if (mlx5_vdpa_create_mr(mvdev, NULL))
 			mlx5_vdpa_warn(mvdev, "create MR failed\n");
 	}
-	mutex_unlock(&ndev->reslock);
+	up_write(&ndev->reslock);
 
 	return 0;
 }
@@ -2411,7 +2411,7 @@ static int mlx5_vdpa_set_map(struct vdpa_device *vdev, struct vhost_iotlb *iotlb
 	bool change_map;
 	int err;
 
-	mutex_lock(&ndev->reslock);
+	down_write(&ndev->reslock);
 
 	err = mlx5_vdpa_handle_set_map(mvdev, iotlb, &change_map);
 	if (err) {
@@ -2423,7 +2423,7 @@ static int mlx5_vdpa_set_map(struct vdpa_device *vdev, struct vhost_iotlb *iotlb
 		err = mlx5_vdpa_change_map(mvdev, iotlb);
 
 err:
-	mutex_unlock(&ndev->reslock);
+	up_write(&ndev->reslock);
 	return err;
 }
 
@@ -2442,7 +2442,6 @@ static void mlx5_vdpa_free(struct vdpa_device *vdev)
 		mlx5_mpfs_del_mac(pfmdev, ndev->config.mac);
 	}
 	mlx5_vdpa_free_resources(&ndev->mvdev);
-	mutex_destroy(&ndev->reslock);
 	kfree(ndev->event_cbs);
 	kfree(ndev->vqs);
 }
@@ -2527,7 +2526,7 @@ static int mlx5_vdpa_get_vendor_vq_stats(struct vdpa_device *vdev, u16 idx,
 	u64 completed_desc;
 	int err = 0;
 
-	mutex_lock(&ndev->reslock);
+	down_read(&ndev->reslock);
 	if (!is_index_valid(mvdev, idx)) {
 		NL_SET_ERR_MSG_MOD(extack, "virtqueue index is not valid");
 		err = -EINVAL;
@@ -2566,7 +2565,7 @@ static int mlx5_vdpa_get_vendor_vq_stats(struct vdpa_device *vdev, u16 idx,
 
 	err = 0;
 out_err:
-	mutex_unlock(&ndev->reslock);
+	up_read(&ndev->reslock);
 	return err;
 }
 
@@ -2835,18 +2834,18 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
 	}
 
 	init_mvqs(ndev);
-	mutex_init(&ndev->reslock);
+	init_rwsem(&ndev->reslock);
 	config = &ndev->config;
 
 	if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MTU)) {
 		err = config_func_mtu(mdev, add_config->net.mtu);
 		if (err)
-			goto err_mtu;
+			goto err_alloc;
 	}
 
 	err = query_mtu(mdev, &mtu);
 	if (err)
-		goto err_mtu;
+		goto err_alloc;
 
 	ndev->config.mtu = cpu_to_mlx5vdpa16(mvdev, mtu);
 
@@ -2860,14 +2859,14 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
 	} else {
 		err = mlx5_query_nic_vport_mac_address(mdev, 0, 0, config->mac);
 		if (err)
-			goto err_mtu;
+			goto err_alloc;
 	}
 
 	if (!is_zero_ether_addr(config->mac)) {
 		pfmdev = pci_get_drvdata(pci_physfn(mdev->pdev));
 		err = mlx5_mpfs_add_mac(pfmdev, config->mac);
 		if (err)
-			goto err_mtu;
+			goto err_alloc;
 
 		ndev->mvdev.mlx_features |= BIT_ULL(VIRTIO_NET_F_MAC);
 	}
@@ -2917,8 +2916,6 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
 err_mpfs:
 	if (!is_zero_ether_addr(config->mac))
 		mlx5_mpfs_del_mac(pfmdev, config->mac);
-err_mtu:
-	mutex_destroy(&ndev->reslock);
err_alloc:
 	put_device(&mvdev->vdev.dev);
 	return err;
-- 
2.35.1