From: Lizhi Hou <lizhi.hou@amd.com>
Subject: [PATCH V11 05/10] accel/amdxdna: Add hardware context
Date: Mon, 18 Nov 2024 09:29:37 -0800
Message-ID: <20241118172942.2014541-6-lizhi.hou@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20241118172942.2014541-1-lizhi.hou@amd.com>
References: <20241118172942.2014541-1-lizhi.hou@amd.com>
X-Mailing-List: linux-kernel@vger.kernel.org
The hardware can be shared among multiple user applications. The
hardware resources are allocated/freed based on requests from user
applications via driver IOCTLs.

DRM_IOCTL_AMDXDNA_CREATE_HWCTX
  Allocate tile columns and create a hardware context structure to
  track the usage and status of the resources. A hardware context ID
  is returned for XDNA command execution.

DRM_IOCTL_AMDXDNA_DESTROY_HWCTX
  Release the hardware context based on its ID. The tile columns
  belonging to this hardware context will be reclaimed.

DRM_IOCTL_AMDXDNA_CONFIG_HWCTX
  Configure the hardware context and bind it to the required
  resources.
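For illustration only (not part of the driver code in this patch), a
minimal userspace sketch of the create/destroy flow. It assumes the
uapi header from this series is installed as <drm/amdxdna_accel.h>
and that the accel node appears as /dev/accel/accel0; the QoS hints
and tile count are placeholder values.

/*
 * Illustrative sketch: create a hardware context, print the returned
 * handles, then destroy it. Placeholder values, assumed device path.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#include <drm/amdxdna_accel.h>

int main(void)
{
	struct amdxdna_qos_info qos = { .gops = 10, .fps = 30 };
	struct amdxdna_drm_create_hwctx create = {
		/* ext/ext_flags must be zero (MBZ) */
		.qos_p = (uintptr_t)&qos,	/* QoS hints copied by the driver */
		.num_tiles = 4,			/* placeholder tile count */
	};
	struct amdxdna_drm_destroy_hwctx destroy = {};
	int fd, ret;

	fd = open("/dev/accel/accel0", O_RDWR);	/* assumed node name */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	ret = ioctl(fd, DRM_IOCTL_AMDXDNA_CREATE_HWCTX, &create);
	if (ret) {
		perror("DRM_IOCTL_AMDXDNA_CREATE_HWCTX");
		close(fd);
		return 1;
	}
	printf("hwctx handle %u syncobj %u\n",
	       create.handle, create.syncobj_handle);

	destroy.handle = create.handle;
	ioctl(fd, DRM_IOCTL_AMDXDNA_DESTROY_HWCTX, &destroy);
	close(fd);
	return 0;
}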
Co-developed-by: Min Ma Signed-off-by: Min Ma Reviewed-by: Jeffrey Hugo Signed-off-by: Lizhi Hou --- drivers/accel/amdxdna/Makefile | 2 + drivers/accel/amdxdna/aie2_ctx.c | 186 ++++++++++++++++++++ drivers/accel/amdxdna/aie2_message.c | 90 ++++++++++ drivers/accel/amdxdna/aie2_pci.c | 43 +++++ drivers/accel/amdxdna/aie2_pci.h | 13 ++ drivers/accel/amdxdna/amdxdna_ctx.c | 219 ++++++++++++++++++++++++ drivers/accel/amdxdna/amdxdna_ctx.h | 39 +++++ drivers/accel/amdxdna/amdxdna_pci_drv.c | 125 +++++++++++++- drivers/accel/amdxdna/amdxdna_pci_drv.h | 20 +++ include/uapi/drm/amdxdna_accel.h | 131 ++++++++++++++ 10 files changed, 867 insertions(+), 1 deletion(-) create mode 100644 drivers/accel/amdxdna/aie2_ctx.c create mode 100644 drivers/accel/amdxdna/amdxdna_ctx.c create mode 100644 drivers/accel/amdxdna/amdxdna_ctx.h diff --git a/drivers/accel/amdxdna/Makefile b/drivers/accel/amdxdna/Makefile index 39d3404fbc8f..c86c90dfd303 100644 --- a/drivers/accel/amdxdna/Makefile +++ b/drivers/accel/amdxdna/Makefile @@ -1,11 +1,13 @@ # SPDX-License-Identifier: GPL-2.0-only =20 amdxdna-y :=3D \ + aie2_ctx.o \ aie2_message.o \ aie2_pci.o \ aie2_psp.o \ aie2_smu.o \ aie2_solver.o \ + amdxdna_ctx.o \ amdxdna_mailbox.o \ amdxdna_mailbox_helper.o \ amdxdna_pci_drv.o \ diff --git a/drivers/accel/amdxdna/aie2_ctx.c b/drivers/accel/amdxdna/aie2_= ctx.c new file mode 100644 index 000000000000..022b2b0b015d --- /dev/null +++ b/drivers/accel/amdxdna/aie2_ctx.c @@ -0,0 +1,186 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2024, Advanced Micro Devices, Inc. + */ + +#include +#include +#include +#include + +#include "aie2_pci.h" +#include "aie2_solver.h" +#include "amdxdna_ctx.h" +#include "amdxdna_mailbox.h" +#include "amdxdna_pci_drv.h" + +static int aie2_hwctx_col_list(struct amdxdna_hwctx *hwctx) +{ + struct amdxdna_dev *xdna =3D hwctx->client->xdna; + struct amdxdna_dev_hdl *ndev; + int start, end, first, last; + u32 width =3D 1, entries =3D 0; + int i; + + if (!hwctx->num_tiles) { + XDNA_ERR(xdna, "Number of tiles is zero"); + return -EINVAL; + } + + ndev =3D xdna->dev_handle; + if (unlikely(!ndev->metadata.core.row_count)) { + XDNA_WARN(xdna, "Core tile row count is zero"); + return -EINVAL; + } + + hwctx->num_col =3D hwctx->num_tiles / ndev->metadata.core.row_count; + if (!hwctx->num_col || hwctx->num_col > ndev->total_col) { + XDNA_ERR(xdna, "Invalid num_col %d", hwctx->num_col); + return -EINVAL; + } + + if (ndev->priv->col_align =3D=3D COL_ALIGN_NATURE) + width =3D hwctx->num_col; + + /* + * In range [start, end], find out columns that is multiple of width. + * 'first' is the first column, + * 'last' is the last column, + * 'entries' is the total number of columns. 
+ */ + start =3D xdna->dev_info->first_col; + end =3D ndev->total_col - hwctx->num_col; + if (start > 0 && end =3D=3D 0) { + XDNA_DBG(xdna, "Force start from col 0"); + start =3D 0; + } + first =3D start + (width - start % width) % width; + last =3D end - end % width; + if (last >=3D first) + entries =3D (last - first) / width + 1; + XDNA_DBG(xdna, "start %d end %d first %d last %d", + start, end, first, last); + + if (unlikely(!entries)) { + XDNA_ERR(xdna, "Start %d end %d width %d", + start, end, width); + return -EINVAL; + } + + hwctx->col_list =3D kmalloc_array(entries, sizeof(*hwctx->col_list), GFP_= KERNEL); + if (!hwctx->col_list) + return -ENOMEM; + + hwctx->col_list_len =3D entries; + hwctx->col_list[0] =3D first; + for (i =3D 1; i < entries; i++) + hwctx->col_list[i] =3D hwctx->col_list[i - 1] + width; + + print_hex_dump_debug("col_list: ", DUMP_PREFIX_OFFSET, 16, 4, hwctx->col_= list, + entries * sizeof(*hwctx->col_list), false); + return 0; +} + +static int aie2_alloc_resource(struct amdxdna_hwctx *hwctx) +{ + struct amdxdna_dev *xdna =3D hwctx->client->xdna; + struct alloc_requests *xrs_req; + int ret; + + xrs_req =3D kzalloc(sizeof(*xrs_req), GFP_KERNEL); + if (!xrs_req) + return -ENOMEM; + + xrs_req->cdo.start_cols =3D hwctx->col_list; + xrs_req->cdo.cols_len =3D hwctx->col_list_len; + xrs_req->cdo.ncols =3D hwctx->num_col; + xrs_req->cdo.qos_cap.opc =3D hwctx->max_opc; + + xrs_req->rqos.gops =3D hwctx->qos.gops; + xrs_req->rqos.fps =3D hwctx->qos.fps; + xrs_req->rqos.dma_bw =3D hwctx->qos.dma_bandwidth; + xrs_req->rqos.latency =3D hwctx->qos.latency; + xrs_req->rqos.exec_time =3D hwctx->qos.frame_exec_time; + xrs_req->rqos.priority =3D hwctx->qos.priority; + + xrs_req->rid =3D (uintptr_t)hwctx; + + ret =3D xrs_allocate_resource(xdna->xrs_hdl, xrs_req, hwctx); + if (ret) + XDNA_ERR(xdna, "Allocate AIE resource failed, ret %d", ret); + + kfree(xrs_req); + return ret; +} + +static void aie2_release_resource(struct amdxdna_hwctx *hwctx) +{ + struct amdxdna_dev *xdna =3D hwctx->client->xdna; + int ret; + + ret =3D xrs_release_resource(xdna->xrs_hdl, (uintptr_t)hwctx); + if (ret) + XDNA_ERR(xdna, "Release AIE resource failed, ret %d", ret); +} + +int aie2_hwctx_init(struct amdxdna_hwctx *hwctx) +{ + struct amdxdna_client *client =3D hwctx->client; + struct amdxdna_dev *xdna =3D client->xdna; + struct amdxdna_hwctx_priv *priv; + int ret; + + priv =3D kzalloc(sizeof(*hwctx->priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + hwctx->priv =3D priv; + + ret =3D aie2_hwctx_col_list(hwctx); + if (ret) { + XDNA_ERR(xdna, "Create col list failed, ret %d", ret); + goto free_priv; + } + + ret =3D aie2_alloc_resource(hwctx); + if (ret) { + XDNA_ERR(xdna, "Alloc hw resource failed, ret %d", ret); + goto free_col_list; + } + + hwctx->status =3D HWCTX_STAT_INIT; + + XDNA_DBG(xdna, "hwctx %s init completed", hwctx->name); + + return 0; + +free_col_list: + kfree(hwctx->col_list); +free_priv: + kfree(priv); + return ret; +} + +void aie2_hwctx_fini(struct amdxdna_hwctx *hwctx) +{ + aie2_release_resource(hwctx); + + kfree(hwctx->col_list); + kfree(hwctx->priv); + kfree(hwctx->cus); +} + +int aie2_hwctx_config(struct amdxdna_hwctx *hwctx, u32 type, u64 value, vo= id *buf, u32 size) +{ + struct amdxdna_dev *xdna =3D hwctx->client->xdna; + + drm_WARN_ON(&xdna->ddev, !mutex_is_locked(&xdna->dev_lock)); + switch (type) { + case DRM_AMDXDNA_HWCTX_CONFIG_CU: + case DRM_AMDXDNA_HWCTX_ASSIGN_DBG_BUF: + case DRM_AMDXDNA_HWCTX_REMOVE_DBG_BUF: + return -EOPNOTSUPP; + default: + XDNA_DBG(xdna, "Not supported 
type %d", type); + return -EOPNOTSUPP; + } +} diff --git a/drivers/accel/amdxdna/aie2_message.c b/drivers/accel/amdxdna/a= ie2_message.c index cbf8ee54c6c2..4b8a71bf4fae 100644 --- a/drivers/accel/amdxdna/aie2_message.c +++ b/drivers/accel/amdxdna/aie2_message.c @@ -3,13 +3,16 @@ * Copyright (C) 2023-2024, Advanced Micro Devices, Inc. */ =20 +#include #include #include #include +#include #include =20 #include "aie2_msg_priv.h" #include "aie2_pci.h" +#include "amdxdna_ctx.h" #include "amdxdna_mailbox.h" #include "amdxdna_mailbox_helper.h" #include "amdxdna_pci_drv.h" @@ -192,3 +195,90 @@ int aie2_query_firmware_version(struct amdxdna_dev_hdl= *ndev, =20 return 0; } + +int aie2_create_context(struct amdxdna_dev_hdl *ndev, struct amdxdna_hwctx= *hwctx) +{ + DECLARE_AIE2_MSG(create_ctx, MSG_OP_CREATE_CONTEXT); + struct amdxdna_dev *xdna =3D ndev->xdna; + struct xdna_mailbox_chann_res x2i; + struct xdna_mailbox_chann_res i2x; + struct cq_pair *cq_pair; + u32 intr_reg; + int ret; + + req.aie_type =3D 1; + req.start_col =3D hwctx->start_col; + req.num_col =3D hwctx->num_col; + req.num_cq_pairs_requested =3D 1; + req.pasid =3D hwctx->client->pasid; + req.context_priority =3D 2; + + ret =3D aie2_send_mgmt_msg_wait(ndev, &msg); + if (ret) + return ret; + + hwctx->fw_ctx_id =3D resp.context_id; + WARN_ONCE(hwctx->fw_ctx_id =3D=3D -1, "Unexpected context id"); + + cq_pair =3D &resp.cq_pair[0]; + x2i.mb_head_ptr_reg =3D AIE2_MBOX_OFF(ndev, cq_pair->x2i_q.head_addr); + x2i.mb_tail_ptr_reg =3D AIE2_MBOX_OFF(ndev, cq_pair->x2i_q.tail_addr); + x2i.rb_start_addr =3D AIE2_SRAM_OFF(ndev, cq_pair->x2i_q.buf_addr); + x2i.rb_size =3D cq_pair->x2i_q.buf_size; + + i2x.mb_head_ptr_reg =3D AIE2_MBOX_OFF(ndev, cq_pair->i2x_q.head_addr); + i2x.mb_tail_ptr_reg =3D AIE2_MBOX_OFF(ndev, cq_pair->i2x_q.tail_addr); + i2x.rb_start_addr =3D AIE2_SRAM_OFF(ndev, cq_pair->i2x_q.buf_addr); + i2x.rb_size =3D cq_pair->i2x_q.buf_size; + + ret =3D pci_irq_vector(to_pci_dev(xdna->ddev.dev), resp.msix_id); + if (ret =3D=3D -EINVAL) { + XDNA_ERR(xdna, "not able to create channel"); + goto out_destroy_context; + } + + intr_reg =3D i2x.mb_head_ptr_reg + 4; + hwctx->priv->mbox_chann =3D xdna_mailbox_create_channel(ndev->mbox, &x2i,= &i2x, + intr_reg, ret); + if (!hwctx->priv->mbox_chann) { + XDNA_ERR(xdna, "not able to create channel"); + ret =3D -EINVAL; + goto out_destroy_context; + } + + XDNA_DBG(xdna, "%s mailbox channel irq: %d, msix_id: %d", + hwctx->name, ret, resp.msix_id); + XDNA_DBG(xdna, "%s created fw ctx %d pasid %d", hwctx->name, + hwctx->fw_ctx_id, hwctx->client->pasid); + + return 0; + +out_destroy_context: + aie2_destroy_context(ndev, hwctx); + return ret; +} + +int aie2_destroy_context(struct amdxdna_dev_hdl *ndev, struct amdxdna_hwct= x *hwctx) +{ + DECLARE_AIE2_MSG(destroy_ctx, MSG_OP_DESTROY_CONTEXT); + struct amdxdna_dev *xdna =3D ndev->xdna; + int ret; + + if (hwctx->fw_ctx_id =3D=3D -1) + return 0; + + xdna_mailbox_stop_channel(hwctx->priv->mbox_chann); + + req.context_id =3D hwctx->fw_ctx_id; + ret =3D aie2_send_mgmt_msg_wait(ndev, &msg); + if (ret) + XDNA_WARN(xdna, "%s destroy context failed, ret %d", hwctx->name, ret); + + xdna_mailbox_destroy_channel(hwctx->priv->mbox_chann); + XDNA_DBG(xdna, "%s destroyed fw ctx %d", hwctx->name, + hwctx->fw_ctx_id); + hwctx->priv->mbox_chann =3D NULL; + hwctx->fw_ctx_id =3D -1; + + return ret; +} diff --git a/drivers/accel/amdxdna/aie2_pci.c b/drivers/accel/amdxdna/aie2_= pci.c index ce0822238b11..6181854c799c 100644 --- a/drivers/accel/amdxdna/aie2_pci.c +++ 
b/drivers/accel/amdxdna/aie2_pci.c @@ -3,6 +3,7 @@ * Copyright (C) 2023-2024, Advanced Micro Devices, Inc. */ =20 +#include #include #include #include @@ -15,6 +16,7 @@ #include "aie2_msg_priv.h" #include "aie2_pci.h" #include "aie2_solver.h" +#include "amdxdna_ctx.h" #include "amdxdna_mailbox.h" #include "amdxdna_pci_drv.h" =20 @@ -210,6 +212,43 @@ static void aie2_mgmt_fw_fini(struct amdxdna_dev_hdl *= ndev) XDNA_DBG(ndev->xdna, "Firmware suspended"); } =20 +static int aie2_xrs_load(void *cb_arg, struct xrs_action_load *action) +{ + struct amdxdna_hwctx *hwctx =3D cb_arg; + struct amdxdna_dev *xdna; + int ret; + + xdna =3D hwctx->client->xdna; + + hwctx->start_col =3D action->part.start_col; + hwctx->num_col =3D action->part.ncols; + ret =3D aie2_create_context(xdna->dev_handle, hwctx); + if (ret) + XDNA_ERR(xdna, "create context failed, ret %d", ret); + + return ret; +} + +static int aie2_xrs_unload(void *cb_arg) +{ + struct amdxdna_hwctx *hwctx =3D cb_arg; + struct amdxdna_dev *xdna; + int ret; + + xdna =3D hwctx->client->xdna; + + ret =3D aie2_destroy_context(xdna->dev_handle, hwctx); + if (ret) + XDNA_ERR(xdna, "destroy context failed, ret %d", ret); + + return ret; +} + +static struct xrs_action_ops aie2_xrs_actions =3D { + .load =3D aie2_xrs_load, + .unload =3D aie2_xrs_unload, +}; + static void aie2_hw_stop(struct amdxdna_dev *xdna) { struct pci_dev *pdev =3D to_pci_dev(xdna->ddev.dev); @@ -417,6 +456,7 @@ static int aie2_init(struct amdxdna_dev *xdna) xrs_cfg.clk_list.cu_clk_list[2] =3D 1000; xrs_cfg.sys_eff_factor =3D 1; xrs_cfg.ddev =3D &xdna->ddev; + xrs_cfg.actions =3D &aie2_xrs_actions; xrs_cfg.total_col =3D ndev->total_col; =20 xdna->xrs_hdl =3D xrsm_init(&xrs_cfg); @@ -453,4 +493,7 @@ static void aie2_fini(struct amdxdna_dev *xdna) const struct amdxdna_dev_ops aie2_ops =3D { .init =3D aie2_init, .fini =3D aie2_fini, + .hwctx_init =3D aie2_hwctx_init, + .hwctx_fini =3D aie2_hwctx_fini, + .hwctx_config =3D aie2_hwctx_config, }; diff --git a/drivers/accel/amdxdna/aie2_pci.h b/drivers/accel/amdxdna/aie2_= pci.h index 4c81d10a0998..b789286bc9d4 100644 --- a/drivers/accel/amdxdna/aie2_pci.h +++ b/drivers/accel/amdxdna/aie2_pci.h @@ -77,6 +77,7 @@ enum psp_reg_idx { }; =20 struct amdxdna_fw_ver; +struct amdxdna_hwctx; =20 struct psp_config { const void *fw_buf; @@ -117,6 +118,10 @@ struct rt_config { u32 value; }; =20 +struct amdxdna_hwctx_priv { + void *mbox_chann; +}; + struct amdxdna_dev_hdl { struct amdxdna_dev *xdna; const struct amdxdna_dev_priv *priv; @@ -189,4 +194,12 @@ int aie2_query_aie_version(struct amdxdna_dev_hdl *nde= v, struct aie_version *ver int aie2_query_aie_metadata(struct amdxdna_dev_hdl *ndev, struct aie_metad= ata *metadata); int aie2_query_firmware_version(struct amdxdna_dev_hdl *ndev, struct amdxdna_fw_ver *fw_ver); +int aie2_create_context(struct amdxdna_dev_hdl *ndev, struct amdxdna_hwctx= *hwctx); +int aie2_destroy_context(struct amdxdna_dev_hdl *ndev, struct amdxdna_hwct= x *hwctx); + +/* aie2_hwctx.c */ +int aie2_hwctx_init(struct amdxdna_hwctx *hwctx); +void aie2_hwctx_fini(struct amdxdna_hwctx *hwctx); +int aie2_hwctx_config(struct amdxdna_hwctx *hwctx, u32 type, u64 value, vo= id *buf, u32 size); + #endif /* _AIE2_PCI_H_ */ diff --git a/drivers/accel/amdxdna/amdxdna_ctx.c b/drivers/accel/amdxdna/am= dxdna_ctx.c new file mode 100644 index 000000000000..9489399adea1 --- /dev/null +++ b/drivers/accel/amdxdna/amdxdna_ctx.c @@ -0,0 +1,219 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2022-2024, Advanced Micro Devices, Inc. 
+ */ + +#include +#include +#include +#include +#include + +#include "amdxdna_ctx.h" +#include "amdxdna_pci_drv.h" + +#define MAX_HWCTX_ID 255 + +static void amdxdna_hwctx_destroy(struct amdxdna_hwctx *hwctx) +{ + struct amdxdna_dev *xdna =3D hwctx->client->xdna; + + /* At this point, user is not able to submit new commands */ + mutex_lock(&xdna->dev_lock); + xdna->dev_info->ops->hwctx_fini(hwctx); + mutex_unlock(&xdna->dev_lock); + + kfree(hwctx->name); + kfree(hwctx); +} + +/* + * This should be called in close() and remove(). DO NOT call in other sys= calls. + * This guarantee that when hwctx and resources will be released, if user + * doesn't call amdxdna_drm_destroy_hwctx_ioctl. + */ +void amdxdna_hwctx_remove_all(struct amdxdna_client *client) +{ + struct amdxdna_hwctx *hwctx; + int next =3D 0; + + mutex_lock(&client->hwctx_lock); + idr_for_each_entry_continue(&client->hwctx_idr, hwctx, next) { + XDNA_DBG(client->xdna, "PID %d close HW context %d", + client->pid, hwctx->id); + idr_remove(&client->hwctx_idr, hwctx->id); + mutex_unlock(&client->hwctx_lock); + amdxdna_hwctx_destroy(hwctx); + mutex_lock(&client->hwctx_lock); + } + mutex_unlock(&client->hwctx_lock); +} + +int amdxdna_drm_create_hwctx_ioctl(struct drm_device *dev, void *data, str= uct drm_file *filp) +{ + struct amdxdna_client *client =3D filp->driver_priv; + struct amdxdna_drm_create_hwctx *args =3D data; + struct amdxdna_dev *xdna =3D to_xdna_dev(dev); + struct amdxdna_hwctx *hwctx; + int ret, idx; + + if (args->ext || args->ext_flags) + return -EINVAL; + + if (!drm_dev_enter(dev, &idx)) + return -ENODEV; + + hwctx =3D kzalloc(sizeof(*hwctx), GFP_KERNEL); + if (!hwctx) { + ret =3D -ENOMEM; + goto exit; + } + + if (copy_from_user(&hwctx->qos, u64_to_user_ptr(args->qos_p), sizeof(hwct= x->qos))) { + XDNA_ERR(xdna, "Access QoS info failed"); + ret =3D -EFAULT; + goto free_hwctx; + } + + hwctx->client =3D client; + hwctx->fw_ctx_id =3D -1; + hwctx->num_tiles =3D args->num_tiles; + hwctx->mem_size =3D args->mem_size; + hwctx->max_opc =3D args->max_opc; + mutex_lock(&client->hwctx_lock); + ret =3D idr_alloc_cyclic(&client->hwctx_idr, hwctx, 0, MAX_HWCTX_ID, GFP_= KERNEL); + if (ret < 0) { + mutex_unlock(&client->hwctx_lock); + XDNA_ERR(xdna, "Allocate hwctx ID failed, ret %d", ret); + goto free_hwctx; + } + hwctx->id =3D ret; + mutex_unlock(&client->hwctx_lock); + + hwctx->name =3D kasprintf(GFP_KERNEL, "hwctx.%d.%d", client->pid, hwctx->= id); + if (!hwctx->name) { + ret =3D -ENOMEM; + goto rm_id; + } + + mutex_lock(&xdna->dev_lock); + ret =3D xdna->dev_info->ops->hwctx_init(hwctx); + if (ret) { + mutex_unlock(&xdna->dev_lock); + XDNA_ERR(xdna, "Init hwctx failed, ret %d", ret); + goto free_name; + } + args->handle =3D hwctx->id; + args->syncobj_handle =3D hwctx->syncobj_hdl; + mutex_unlock(&xdna->dev_lock); + + XDNA_DBG(xdna, "PID %d create HW context %d, ret %d", client->pid, args->= handle, ret); + drm_dev_exit(idx); + return 0; + +free_name: + kfree(hwctx->name); +rm_id: + mutex_lock(&client->hwctx_lock); + idr_remove(&client->hwctx_idr, hwctx->id); + mutex_unlock(&client->hwctx_lock); +free_hwctx: + kfree(hwctx); +exit: + drm_dev_exit(idx); + return ret; +} + +int amdxdna_drm_destroy_hwctx_ioctl(struct drm_device *dev, void *data, st= ruct drm_file *filp) +{ + struct amdxdna_client *client =3D filp->driver_priv; + struct amdxdna_drm_destroy_hwctx *args =3D data; + struct amdxdna_dev *xdna =3D to_xdna_dev(dev); + struct amdxdna_hwctx *hwctx; + int ret =3D 0, idx; + + if (!drm_dev_enter(dev, &idx)) + return -ENODEV; + + 
mutex_lock(&client->hwctx_lock); + hwctx =3D idr_find(&client->hwctx_idr, args->handle); + if (!hwctx) { + mutex_unlock(&client->hwctx_lock); + ret =3D -EINVAL; + XDNA_DBG(xdna, "PID %d HW context %d not exist", + client->pid, args->handle); + goto out; + } + idr_remove(&client->hwctx_idr, hwctx->id); + mutex_unlock(&client->hwctx_lock); + + amdxdna_hwctx_destroy(hwctx); + + XDNA_DBG(xdna, "PID %d destroyed HW context %d", client->pid, args->handl= e); +out: + drm_dev_exit(idx); + return ret; +} + +int amdxdna_drm_config_hwctx_ioctl(struct drm_device *dev, void *data, str= uct drm_file *filp) +{ + struct amdxdna_client *client =3D filp->driver_priv; + struct amdxdna_drm_config_hwctx *args =3D data; + struct amdxdna_dev *xdna =3D to_xdna_dev(dev); + struct amdxdna_hwctx *hwctx; + u32 buf_size; + void *buf; + u64 val; + int ret; + + if (!xdna->dev_info->ops->hwctx_config) + return -EOPNOTSUPP; + + val =3D args->param_val; + buf_size =3D args->param_val_size; + + switch (args->param_type) { + case DRM_AMDXDNA_HWCTX_CONFIG_CU: + /* For those types that param_val is pointer */ + if (buf_size > PAGE_SIZE) { + XDNA_ERR(xdna, "Config CU param buffer too large"); + return -E2BIG; + } + + /* Hwctx needs to keep buf */ + buf =3D kzalloc(PAGE_SIZE, GFP_KERNEL); + if (!buf) + return -ENOMEM; + + if (copy_from_user(buf, u64_to_user_ptr(val), buf_size)) { + kfree(buf); + return -EFAULT; + } + + break; + case DRM_AMDXDNA_HWCTX_ASSIGN_DBG_BUF: + case DRM_AMDXDNA_HWCTX_REMOVE_DBG_BUF: + /* For those types that param_val is a value */ + buf =3D NULL; + buf_size =3D 0; + break; + default: + XDNA_DBG(xdna, "Unknown HW context config type %d", args->param_type); + return -EINVAL; + } + + mutex_lock(&xdna->dev_lock); + hwctx =3D idr_find(&client->hwctx_idr, args->handle); + if (!hwctx) { + XDNA_DBG(xdna, "PID %d failed to get hwctx %d", client->pid, args->handl= e); + ret =3D -EINVAL; + goto unlock; + } + + ret =3D xdna->dev_info->ops->hwctx_config(hwctx, args->param_type, val, b= uf, buf_size); + +unlock: + mutex_unlock(&xdna->dev_lock); + kfree(buf); + return ret; +} diff --git a/drivers/accel/amdxdna/amdxdna_ctx.h b/drivers/accel/amdxdna/am= dxdna_ctx.h new file mode 100644 index 000000000000..00b96cf2e9a7 --- /dev/null +++ b/drivers/accel/amdxdna/amdxdna_ctx.h @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2022-2024, Advanced Micro Devices, Inc. 
+ */ + +#ifndef _AMDXDNA_CTX_H_ +#define _AMDXDNA_CTX_H_ + +struct amdxdna_hwctx { + struct amdxdna_client *client; + struct amdxdna_hwctx_priv *priv; + char *name; + + u32 id; + u32 max_opc; + u32 num_tiles; + u32 mem_size; + u32 fw_ctx_id; + u32 col_list_len; + u32 *col_list; + u32 start_col; + u32 num_col; +#define HWCTX_STAT_INIT 0 +#define HWCTX_STAT_READY 1 +#define HWCTX_STAT_STOP 2 + u32 status; + u32 old_status; + + struct amdxdna_qos_info qos; + struct amdxdna_hwctx_param_config_cu *cus; + u32 syncobj_hdl; +}; + +void amdxdna_hwctx_remove_all(struct amdxdna_client *client); +int amdxdna_drm_create_hwctx_ioctl(struct drm_device *dev, void *data, str= uct drm_file *filp); +int amdxdna_drm_config_hwctx_ioctl(struct drm_device *dev, void *data, str= uct drm_file *filp); +int amdxdna_drm_destroy_hwctx_ioctl(struct drm_device *dev, void *data, st= ruct drm_file *filp); + +#endif /* _AMDXDNA_CTX_H_ */ diff --git a/drivers/accel/amdxdna/amdxdna_pci_drv.c b/drivers/accel/amdxdn= a/amdxdna_pci_drv.c index b8caf323a0c6..dfe682df5640 100644 --- a/drivers/accel/amdxdna/amdxdna_pci_drv.c +++ b/drivers/accel/amdxdna/amdxdna_pci_drv.c @@ -3,13 +3,16 @@ * Copyright (C) 2022-2024, Advanced Micro Devices, Inc. */ =20 +#include #include #include #include #include #include +#include #include =20 +#include "amdxdna_ctx.h" #include "amdxdna_pci_drv.h" =20 /* @@ -33,7 +36,108 @@ static const struct amdxdna_device_id amdxdna_ids[] =3D= { {0} }; =20 -DEFINE_DRM_ACCEL_FOPS(amdxdna_fops); +static int amdxdna_drm_open(struct drm_device *ddev, struct drm_file *filp) +{ + struct amdxdna_dev *xdna =3D to_xdna_dev(ddev); + struct amdxdna_client *client; + int ret; + + client =3D kzalloc(sizeof(*client), GFP_KERNEL); + if (!client) + return -ENOMEM; + + client->pid =3D pid_nr(filp->pid); + client->xdna =3D xdna; + + client->sva =3D iommu_sva_bind_device(xdna->ddev.dev, current->mm); + if (IS_ERR(client->sva)) { + ret =3D PTR_ERR(client->sva); + XDNA_ERR(xdna, "SVA bind device failed, ret %d", ret); + goto failed; + } + client->pasid =3D iommu_sva_get_pasid(client->sva); + if (client->pasid =3D=3D IOMMU_PASID_INVALID) { + XDNA_ERR(xdna, "SVA get pasid failed"); + ret =3D -ENODEV; + goto unbind_sva; + } + mutex_init(&client->hwctx_lock); + idr_init_base(&client->hwctx_idr, AMDXDNA_INVALID_CTX_HANDLE + 1); + + mutex_lock(&xdna->dev_lock); + list_add_tail(&client->node, &xdna->client_list); + mutex_unlock(&xdna->dev_lock); + + filp->driver_priv =3D client; + client->filp =3D filp; + + XDNA_DBG(xdna, "pid %d opened", client->pid); + return 0; + +unbind_sva: + iommu_sva_unbind_device(client->sva); +failed: + kfree(client); + + return ret; +} + +static void amdxdna_drm_close(struct drm_device *ddev, struct drm_file *fi= lp) +{ + struct amdxdna_client *client =3D filp->driver_priv; + struct amdxdna_dev *xdna =3D to_xdna_dev(ddev); + + XDNA_DBG(xdna, "closing pid %d", client->pid); + + idr_destroy(&client->hwctx_idr); + mutex_destroy(&client->hwctx_lock); + + iommu_sva_unbind_device(client->sva); + + XDNA_DBG(xdna, "pid %d closed", client->pid); + kfree(client); +} + +static int amdxdna_flush(struct file *f, fl_owner_t id) +{ + struct drm_file *filp =3D f->private_data; + struct amdxdna_client *client =3D filp->driver_priv; + struct amdxdna_dev *xdna =3D client->xdna; + int idx; + + XDNA_DBG(xdna, "PID %d flushing...", client->pid); + if (!drm_dev_enter(&xdna->ddev, &idx)) + return 0; + + mutex_lock(&xdna->dev_lock); + list_del_init(&client->node); + mutex_unlock(&xdna->dev_lock); + amdxdna_hwctx_remove_all(client); + + 
drm_dev_exit(idx); + return 0; +} + +static const struct drm_ioctl_desc amdxdna_drm_ioctls[] =3D { + /* Context */ + DRM_IOCTL_DEF_DRV(AMDXDNA_CREATE_HWCTX, amdxdna_drm_create_hwctx_ioctl, 0= ), + DRM_IOCTL_DEF_DRV(AMDXDNA_DESTROY_HWCTX, amdxdna_drm_destroy_hwctx_ioctl,= 0), + DRM_IOCTL_DEF_DRV(AMDXDNA_CONFIG_HWCTX, amdxdna_drm_config_hwctx_ioctl, 0= ), +}; + +static const struct file_operations amdxdna_fops =3D { + .owner =3D THIS_MODULE, + .open =3D accel_open, + .release =3D drm_release, + .flush =3D amdxdna_flush, + .unlocked_ioctl =3D drm_ioctl, + .compat_ioctl =3D drm_compat_ioctl, + .poll =3D drm_poll, + .read =3D drm_read, + .llseek =3D noop_llseek, + .mmap =3D drm_gem_mmap, + .fop_flags =3D FOP_UNSIGNED_OFFSET, +}; =20 const struct drm_driver amdxdna_drm_drv =3D { .driver_features =3D DRIVER_GEM | DRIVER_COMPUTE_ACCEL | @@ -41,6 +145,10 @@ const struct drm_driver amdxdna_drm_drv =3D { .fops =3D &amdxdna_fops, .name =3D "amdxdna_accel_driver", .desc =3D "AMD XDNA DRM implementation", + .open =3D amdxdna_drm_open, + .postclose =3D amdxdna_drm_close, + .ioctls =3D amdxdna_drm_ioctls, + .num_ioctls =3D ARRAY_SIZE(amdxdna_drm_ioctls), }; =20 static const struct amdxdna_dev_info * @@ -70,6 +178,7 @@ static int amdxdna_probe(struct pci_dev *pdev, const str= uct pci_device_id *id) return -ENODEV; =20 drmm_mutex_init(&xdna->ddev, &xdna->dev_lock); + INIT_LIST_HEAD(&xdna->client_list); pci_set_drvdata(pdev, xdna); =20 mutex_lock(&xdna->dev_lock); @@ -106,11 +215,25 @@ static int amdxdna_probe(struct pci_dev *pdev, const = struct pci_device_id *id) static void amdxdna_remove(struct pci_dev *pdev) { struct amdxdna_dev *xdna =3D pci_get_drvdata(pdev); + struct amdxdna_client *client; =20 drm_dev_unplug(&xdna->ddev); amdxdna_sysfs_fini(xdna); =20 mutex_lock(&xdna->dev_lock); + client =3D list_first_entry_or_null(&xdna->client_list, + struct amdxdna_client, node); + while (client) { + list_del_init(&client->node); + mutex_unlock(&xdna->dev_lock); + + amdxdna_hwctx_remove_all(client); + + mutex_lock(&xdna->dev_lock); + client =3D list_first_entry_or_null(&xdna->client_list, + struct amdxdna_client, node); + } + xdna->dev_info->ops->fini(xdna); mutex_unlock(&xdna->dev_lock); } diff --git a/drivers/accel/amdxdna/amdxdna_pci_drv.h b/drivers/accel/amdxdn= a/amdxdna_pci_drv.h index c0710d3130fd..5ec7fe168406 100644 --- a/drivers/accel/amdxdna/amdxdna_pci_drv.h +++ b/drivers/accel/amdxdna/amdxdna_pci_drv.h @@ -18,6 +18,7 @@ extern const struct drm_driver amdxdna_drm_drv; =20 struct amdxdna_dev; +struct amdxdna_hwctx; =20 /* * struct amdxdna_dev_ops - Device hardware operation callbacks @@ -25,6 +26,9 @@ struct amdxdna_dev; struct amdxdna_dev_ops { int (*init)(struct amdxdna_dev *xdna); void (*fini)(struct amdxdna_dev *xdna); + int (*hwctx_init)(struct amdxdna_hwctx *hwctx); + void (*hwctx_fini)(struct amdxdna_hwctx *hwctx); + int (*hwctx_config)(struct amdxdna_hwctx *hwctx, u32 type, u64 value, voi= d *buf, u32 size); }; =20 /* @@ -61,6 +65,7 @@ struct amdxdna_dev { void *xrs_hdl; =20 struct mutex dev_lock; /* per device lock */ + struct list_head client_list; struct amdxdna_fw_ver fw_ver; }; =20 @@ -73,6 +78,21 @@ struct amdxdna_device_id { const struct amdxdna_dev_info *dev_info; }; =20 +/* + * struct amdxdna_client - amdxdna client + * A per fd data structure for managing context and other user process stu= ffs. 
+ */ +struct amdxdna_client { + struct list_head node; + pid_t pid; + struct mutex hwctx_lock; /* protect hwctx */ + struct idr hwctx_idr; + struct amdxdna_dev *xdna; + struct drm_file *filp; + struct iommu_sva *sva; + int pasid; +}; + /* Add device info below */ extern const struct amdxdna_dev_info dev_npu1_info; extern const struct amdxdna_dev_info dev_npu2_info; diff --git a/include/uapi/drm/amdxdna_accel.h b/include/uapi/drm/amdxdna_ac= cel.h index 6d97e8e90cf6..a0dc821c1363 100644 --- a/include/uapi/drm/amdxdna_accel.h +++ b/include/uapi/drm/amdxdna_accel.h @@ -6,17 +6,148 @@ #ifndef _UAPI_AMDXDNA_ACCEL_H_ #define _UAPI_AMDXDNA_ACCEL_H_ =20 +#include #include "drm.h" =20 #if defined(__cplusplus) extern "C" { #endif =20 +#define AMDXDNA_INVALID_CTX_HANDLE 0 + enum amdxdna_device_type { AMDXDNA_DEV_TYPE_UNKNOWN =3D -1, AMDXDNA_DEV_TYPE_KMQ, }; =20 +enum amdxdna_drm_ioctl_id { + DRM_AMDXDNA_CREATE_HWCTX, + DRM_AMDXDNA_DESTROY_HWCTX, + DRM_AMDXDNA_CONFIG_HWCTX, +}; + +/** + * struct qos_info - QoS information for driver. + * @gops: Giga operations per second. + * @fps: Frames per second. + * @dma_bandwidth: DMA bandwidtha. + * @latency: Frame response latency. + * @frame_exec_time: Frame execution time. + * @priority: Request priority. + * + * User program can provide QoS hints to driver. + */ +struct amdxdna_qos_info { + __u32 gops; + __u32 fps; + __u32 dma_bandwidth; + __u32 latency; + __u32 frame_exec_time; + __u32 priority; +}; + +/** + * struct amdxdna_drm_create_hwctx - Create hardware context. + * @ext: MBZ. + * @ext_flags: MBZ. + * @qos_p: Address of QoS info. + * @umq_bo: BO handle for user mode queue(UMQ). + * @log_buf_bo: BO handle for log buffer. + * @max_opc: Maximum operations per cycle. + * @num_tiles: Number of AIE tiles. + * @mem_size: Size of AIE tile memory. + * @umq_doorbell: Returned offset of doorbell associated with UMQ. + * @handle: Returned hardware context handle. + * @syncobj_handle: Returned syncobj handle for command completion. + */ +struct amdxdna_drm_create_hwctx { + __u64 ext; + __u64 ext_flags; + __u64 qos_p; + __u32 umq_bo; + __u32 log_buf_bo; + __u32 max_opc; + __u32 num_tiles; + __u32 mem_size; + __u32 umq_doorbell; + __u32 handle; + __u32 syncobj_handle; +}; + +/** + * struct amdxdna_drm_destroy_hwctx - Destroy hardware context. + * @handle: Hardware context handle. + * @pad: Structure padding. + */ +struct amdxdna_drm_destroy_hwctx { + __u32 handle; + __u32 pad; +}; + +/** + * struct amdxdna_cu_config - configuration for one CU + * @cu_bo: CU configuration buffer bo handle. + * @cu_func: Function of a CU. + * @pad: Structure padding. + */ +struct amdxdna_cu_config { + __u32 cu_bo; + __u8 cu_func; + __u8 pad[3]; +}; + +/** + * struct amdxdna_hwctx_param_config_cu - configuration for CUs in hardwar= e context + * @num_cus: Number of CUs to configure. + * @pad: Structure padding. + * @cu_configs: Array of CU configurations of struct amdxdna_cu_config. + */ +struct amdxdna_hwctx_param_config_cu { + __u16 num_cus; + __u16 pad[3]; + struct amdxdna_cu_config cu_configs[] __counted_by(num_cus); +}; + +enum amdxdna_drm_config_hwctx_param { + DRM_AMDXDNA_HWCTX_CONFIG_CU, + DRM_AMDXDNA_HWCTX_ASSIGN_DBG_BUF, + DRM_AMDXDNA_HWCTX_REMOVE_DBG_BUF, + DRM_AMDXDNA_HWCTX_CONFIG_NUM +}; + +/** + * struct amdxdna_drm_config_hwctx - Configure hardware context. + * @handle: hardware context handle. + * @param_type: Value in enum amdxdna_drm_config_hwctx_param. Specifies the + * structure passed in via param_val. 
+ * @param_val: A structure specified by the param_type struct member. + * @param_val_size: Size of the parameter buffer pointed to by the param_v= al. + * If param_val is not a pointer, driver can ignore this. + * @pad: Structure padding. + * + * Note: if the param_val is a pointer pointing to a buffer, the maximum s= ize + * of the buffer is 4KiB(PAGE_SIZE). + */ +struct amdxdna_drm_config_hwctx { + __u32 handle; + __u32 param_type; + __u64 param_val; + __u32 param_val_size; + __u32 pad; +}; + +#define DRM_IOCTL_AMDXDNA_CREATE_HWCTX \ + DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDXDNA_CREATE_HWCTX, \ + struct amdxdna_drm_create_hwctx) + +#define DRM_IOCTL_AMDXDNA_DESTROY_HWCTX \ + DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDXDNA_DESTROY_HWCTX, \ + struct amdxdna_drm_destroy_hwctx) + +#define DRM_IOCTL_AMDXDNA_CONFIG_HWCTX \ + DRM_IOWR(DRM_COMMAND_BASE + DRM_AMDXDNA_CONFIG_HWCTX, \ + struct amdxdna_drm_config_hwctx) + #if defined(__cplusplus) } /* extern c end */ #endif --=20 2.34.1
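As a follow-up illustration of the CONFIG_HWCTX uapi above (again a
sketch, not part of the patch): this is how a caller might fill the
pointer-style DRM_AMDXDNA_HWCTX_CONFIG_CU parameter buffer. Note that
with this patch alone aie2_hwctx_config() still returns -EOPNOTSUPP
for this type (it is presumably wired up later in the series); the BO
handle and CU function index below are placeholders.

/*
 * Illustrative sketch: build a one-entry CU config and pass it via
 * param_val/param_val_size of DRM_IOCTL_AMDXDNA_CONFIG_HWCTX.
 */
#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>

#include <drm/amdxdna_accel.h>

static int config_one_cu(int fd, uint32_t hwctx_handle, uint32_t cu_bo)
{
	struct amdxdna_hwctx_param_config_cu *cfg;
	struct amdxdna_drm_config_hwctx args = {};
	size_t sz;
	int ret;

	/* Flexible array with one entry; must stay within 4 KiB (PAGE_SIZE). */
	sz = sizeof(*cfg) + sizeof(cfg->cu_configs[0]);
	cfg = calloc(1, sz);
	if (!cfg)
		return -1;

	cfg->num_cus = 1;
	cfg->cu_configs[0].cu_bo = cu_bo;	/* BO holding the CU configuration */
	cfg->cu_configs[0].cu_func = 0;		/* placeholder CU function index */

	args.handle = hwctx_handle;
	args.param_type = DRM_AMDXDNA_HWCTX_CONFIG_CU;
	args.param_val = (uintptr_t)cfg;	/* pointer-style parameter */
	args.param_val_size = sz;

	ret = ioctl(fd, DRM_IOCTL_AMDXDNA_CONFIG_HWCTX, &args);
	free(cfg);
	return ret;
}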