From nobody Sun Dec 14 12:17:30 2025
From: George Cherian
To: , , , ,
CC: ,
Subject: [PATCH v6 1/4] soc: marvell: Add a general purpose RVU PF driver
Date: Thu, 22 May 2025 05:44:41 +0000
Message-ID: <20250522054444.3531124-2-george.cherian@marvell.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250522054444.3531124-1-george.cherian@marvell.com>
References: <20250522054444.3531124-1-george.cherian@marvell.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: List-Subscribe: List-Unsubscribe:
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

From: Anshumali Gaur

The resource virtualization unit (RVU) on Marvell's Octeon series of silicons
maps hardware resources from the network, crypto and other functional blocks
into PCI-compatible physical and virtual functions. Each functional block in
turn has multiple local functions (LFs) that can be provisioned to PCI
devices. RVU supports multiple PCIe SR-IOV physical functions (PFs) and
virtual functions (VFs), and the RVU admin function (AF) manages all the
resources (local functions etc.) in the system.

The functionality of these PFs and VFs depends on which block LFs are
attached to them. Depending on the use case, some PFs support I/O (i.e. have
LFs attached) and some do not. For the use cases where a PF does not (need
to) support I/O, the PF's driver is limited to the following functionality:

1. Creating and destroying PCIe SR-IOV VFs
2. Mailbox communication between VFs and the admin function (RVU AF)
3. PCIe function level reset (FLR) for VFs

For such PFs this patch series adds a general purpose driver that supports
the above functionality, which avoids duplicating it for different RVU PFs.

This patch adds a basic stub PF driver with PCI device init logic and
SR-IOV enable/disable support.
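Since the diff below is quoted-printable encoded, the two interesting pieces
of this patch are easier to read as a condensed sketch. Names and registers
are the ones used in the patch; error handling and cleanup paths are trimmed:

/* Defer probing until the AF driver has programmed the RVUM block revision
 * (pfdev->reg_base is the PF CSR mapping).
 */
static int rvu_gen_pf_check_pf_usable(struct gen_pf_dev *pfdev)
{
	u64 rev = readq(pfdev->reg_base + RVU_PF_BLOCK_ADDRX_DISC(BLKADDR_RVUM));

	rev = (rev >> 12) & 0xFF;
	return rev ? 0 : -EPROBE_DEFER;
}

/* sriov_configure: a VF count of zero disables SR-IOV, any other count
 * enables it via pci_enable_sriov()/pci_disable_sriov().
 */
static int rvu_gen_pf_sriov_configure(struct pci_dev *pdev, int numvfs)
{
	if (numvfs == 0)
		return rvu_gen_pf_sriov_disable(pdev);

	return rvu_gen_pf_sriov_enable(pdev, numvfs);
}

With the driver bound, VFs are created the usual SR-IOV way, by writing the
desired VF count to the PF device's sriov_numvfs sysfs attribute.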
Signed-off-by: Anshumali Gaur Signed-off-by: George Cherian Reviewed-by: Alexander Sverdlin --- drivers/soc/Kconfig | 1 + drivers/soc/Makefile | 1 + drivers/soc/marvell/Kconfig | 19 +++ drivers/soc/marvell/Makefile | 2 + drivers/soc/marvell/rvu_gen_pf/Makefile | 5 + drivers/soc/marvell/rvu_gen_pf/gen_pf.c | 160 ++++++++++++++++++++++++ drivers/soc/marvell/rvu_gen_pf/gen_pf.h | 19 +++ 7 files changed, 207 insertions(+) create mode 100644 drivers/soc/marvell/Kconfig create mode 100644 drivers/soc/marvell/Makefile create mode 100644 drivers/soc/marvell/rvu_gen_pf/Makefile create mode 100644 drivers/soc/marvell/rvu_gen_pf/gen_pf.c create mode 100644 drivers/soc/marvell/rvu_gen_pf/gen_pf.h diff --git a/drivers/soc/Kconfig b/drivers/soc/Kconfig index 6a8daeb8c4b9..a5d3770a6acf 100644 --- a/drivers/soc/Kconfig +++ b/drivers/soc/Kconfig @@ -15,6 +15,7 @@ source "drivers/soc/imx/Kconfig" source "drivers/soc/ixp4xx/Kconfig" source "drivers/soc/litex/Kconfig" source "drivers/soc/loongson/Kconfig" +source "drivers/soc/marvell/Kconfig" source "drivers/soc/mediatek/Kconfig" source "drivers/soc/microchip/Kconfig" source "drivers/soc/nuvoton/Kconfig" diff --git a/drivers/soc/Makefile b/drivers/soc/Makefile index 2037a8695cb2..b20ec6071302 100644 --- a/drivers/soc/Makefile +++ b/drivers/soc/Makefile @@ -20,6 +20,7 @@ obj-y +=3D ixp4xx/ obj-$(CONFIG_SOC_XWAY) +=3D lantiq/ obj-$(CONFIG_LITEX_SOC_CONTROLLER) +=3D litex/ obj-y +=3D loongson/ +obj-y +=3D marvell/ obj-y +=3D mediatek/ obj-y +=3D microchip/ obj-y +=3D nuvoton/ diff --git a/drivers/soc/marvell/Kconfig b/drivers/soc/marvell/Kconfig new file mode 100644 index 000000000000..b55d3bbfaf2a --- /dev/null +++ b/drivers/soc/marvell/Kconfig @@ -0,0 +1,19 @@ +# SPDX-License-Identifier: GPL-2.0-only +# +# MARVELL SoC drivers +# + +menu "Marvell SoC drivers" + +config MARVELL_OCTEON_RVU_GEN_PF + tristate "Marvell Octeon RVU Generic PF Driver" + depends on ARM64 && PCI && OCTEONTX2_AF + default n + help + This driver is used to create and destroy PCIe SRIOV VFs of the + RVU PFs that doesn't need to support any I/O functionality. It also + enables VFs to communicate with RVU admin function (AF) & handles + PCIe FLR for VFs. + + Say =E2=80=98Yes=E2=80=99 to this driver if you have such a RVU PF device. +endmenu diff --git a/drivers/soc/marvell/Makefile b/drivers/soc/marvell/Makefile new file mode 100644 index 000000000000..9a6917393873 --- /dev/null +++ b/drivers/soc/marvell/Makefile @@ -0,0 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0 +obj-$(CONFIG_MARVELL_OCTEON_RVU_GEN_PF) +=3D rvu_gen_pf/ diff --git a/drivers/soc/marvell/rvu_gen_pf/Makefile b/drivers/soc/marvell/= rvu_gen_pf/Makefile new file mode 100644 index 000000000000..6c3d2568942b --- /dev/null +++ b/drivers/soc/marvell/rvu_gen_pf/Makefile @@ -0,0 +1,5 @@ +# +# Makefile for Marvell's Octeon RVU GENERIC PF driver +# +obj-$(CONFIG_MARVELL_OCTEON_RVU_GEN_PF) +=3D gen_pf.o +ccflags-y +=3D -I$(srctree)/drivers/net/ethernet/marvell/octeontx2/af diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c b/drivers/soc/marvell/= rvu_gen_pf/gen_pf.c new file mode 100644 index 000000000000..6437916cb6d7 --- /dev/null +++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c @@ -0,0 +1,160 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Marvell Octeon RVU Generic Physical Function driver + * + * Copyright (C) 2024 Marvell. 
+ * + */ +#include +#include +#include +#include +#include +#include +#include + +#include "gen_pf.h" +#include +#include + +#define DRV_NAME "rvu_generic_pf" + +/* Supported devices */ +static const struct pci_device_id rvu_gen_pf_id_table[] =3D { + { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, 0xA0F6) }, + { } /* end of table */ +}; +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Marvell Octeon RVU Generic PF Driver"); +MODULE_DEVICE_TABLE(pci, rvu_gen_pf_id_table); + +static int rvu_gen_pf_check_pf_usable(struct gen_pf_dev *pfdev) +{ + u64 rev; + + rev =3D readq(pfdev->reg_base + RVU_PF_BLOCK_ADDRX_DISC(BLKADDR_RVUM)); + rev =3D (rev >> 12) & 0xFF; + /* + * Check if AF has setup revision for RVUM block, + * otherwise this driver probe should be deferred + * until AF driver comes up. + */ + if (!rev) { + dev_warn(pfdev->dev, + "AF is not initialized, deferring probe\n"); + return -EPROBE_DEFER; + } + return 0; +} + +static int rvu_gen_pf_sriov_enable(struct pci_dev *pdev, int numvfs) +{ + int ret; + + ret =3D pci_enable_sriov(pdev, numvfs); + if (ret) + return ret; + + return numvfs; +} + +static int rvu_gen_pf_sriov_disable(struct pci_dev *pdev) +{ + int numvfs =3D pci_num_vf(pdev); + + if (!numvfs) + return 0; + + pci_disable_sriov(pdev); + + return 0; +} + +static int rvu_gen_pf_sriov_configure(struct pci_dev *pdev, int numvfs) +{ + if (numvfs =3D=3D 0) + return rvu_gen_pf_sriov_disable(pdev); + + return rvu_gen_pf_sriov_enable(pdev, numvfs); +} + +static void rvu_gen_pf_remove(struct pci_dev *pdev) +{ + struct gen_pf_dev *pfdev =3D pci_get_drvdata(pdev); + + rvu_gen_pf_sriov_disable(pfdev->pdev); + pci_set_drvdata(pdev, NULL); + + pci_release_regions(pdev); +} + +static int rvu_gen_pf_probe(struct pci_dev *pdev, const struct pci_device_= id *id) +{ + struct device *dev =3D &pdev->dev; + struct gen_pf_dev *pfdev; + int err; + + err =3D pcim_enable_device(pdev); + if (err) { + dev_err(dev, "Failed to enable PCI device\n"); + return err; + } + + err =3D pci_request_regions(pdev, DRV_NAME); + if (err) { + dev_err(dev, "PCI request regions failed %d\n", err); + goto err_map_failed; + } + + err =3D dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48)); + if (err) { + dev_err(dev, "DMA mask config failed, abort\n"); + goto err_release_regions; + } + + pci_set_master(pdev); + + pfdev =3D devm_kzalloc(dev, sizeof(struct gen_pf_dev), GFP_KERNEL); + if (!pfdev) { + err =3D -ENOMEM; + goto err_release_regions; + } + + pci_set_drvdata(pdev, pfdev); + pfdev->pdev =3D pdev; + pfdev->dev =3D dev; + pfdev->total_vfs =3D pci_sriov_get_totalvfs(pdev); + + err =3D rvu_gen_pf_check_pf_usable(pfdev); + if (err) + goto err_release_regions; + + return 0; + +err_release_regions: + pci_release_regions(pdev); + pci_set_drvdata(pdev, NULL); +err_map_failed: + pci_disable_device(pdev); + return err; +} + +static struct pci_driver rvu_gen_driver =3D { + .name =3D DRV_NAME, + .id_table =3D rvu_gen_pf_id_table, + .probe =3D rvu_gen_pf_probe, + .remove =3D rvu_gen_pf_remove, + .sriov_configure =3D rvu_gen_pf_sriov_configure, +}; + +static int __init rvu_gen_pf_init_module(void) +{ + return pci_register_driver(&rvu_gen_driver); +} + +static void __exit rvu_gen_pf_cleanup_module(void) +{ + pci_unregister_driver(&rvu_gen_driver); +} + +module_init(rvu_gen_pf_init_module); +module_exit(rvu_gen_pf_cleanup_module); diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h b/drivers/soc/marvell/= rvu_gen_pf/gen_pf.h new file mode 100644 index 000000000000..d89b674b1a0f --- /dev/null +++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h @@ -0,0 +1,19 @@ 
+/* SPDX-License-Identifier: GPL-2.0 */ +/* Marvell Octeon RVU Generic Physical Function driver + * + * Copyright (C) 2024 Marvell. + */ +#include +#include + +#define RVU_PFFUNC(pf, func) \ + ((((pf) & RVU_PFVF_PF_MASK) << RVU_PFVF_PF_SHIFT) | \ + (((func) & RVU_PFVF_FUNC_MASK) << RVU_PFVF_FUNC_SHIFT)) + +struct gen_pf_dev { + struct pci_dev *pdev; + struct device *dev; + void __iomem *reg_base; + int pf; + u8 total_vfs; +}; --=20 2.34.1 From nobody Sun Dec 14 12:17:30 2025 Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6DE4C1A3A8D for ; Thu, 22 May 2025 05:44:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=67.231.156.173 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747892698; cv=none; b=Uc5/Jf1EnSEPjpeTf2NC5CSiRWqY+OaelnvE7O99haZTFyk88zO7hxBAVY4zRB4c03uXhBpgW2/qa9pbTNLbYNSeZ7HVaIsLtsJEz9OK7sDmjNv21rlrDH4FVLuroV1p0GfIfPr5K7DSVkLSHr6vTVTQC9y1ORV3nhvf5SKri98= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747892698; c=relaxed/simple; bh=aIPmYCuNeSl1lIZkxMWBgTo5kWuSyrVZK7/hMZEJT0g=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=XZ15DJkHwtcuxWLCnpNdLq2X90pVE3odpnIIQNSRAN6o2Yly655OtdKnBinhidWvi4EvaRI9pNWUDSQR+Cp3D8aq+shR93qMg1Ti8oSasHBx/6h+qCvTAs3/8z12oRY2YLMHs6gOkiGWIDTipQCWNZ8/whObSg4BsJW+JRswKpM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com; spf=pass smtp.mailfrom=marvell.com; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b=SBGMFseR; arc=none smtp.client-ip=67.231.156.173 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=marvell.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b="SBGMFseR" Received: from pps.filterd (m0431383.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 54M5JgrL013402; Wed, 21 May 2025 22:44:46 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= cc:content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to; s=pfpt0220; bh=a PSmNWLG1ZAZWOFTbvTZ1iFjyswCtrnwz+UwFwdN3Lw=; b=SBGMFseRwplcS6mwb 4KX8dBiHzKeKMNLZjXJi2WoGWEp8dKFR2/HKmYFPyI3TBDtxmiwRiVGU34v47bWl 4hzucd9oMpHdkYzi4WJnsPcCcArRfdqvtRGJHlDJPA4EsMkkaq6vDPTRlficHD2e aCIpqEJ/vTY+V+WsyK7E0vGuSWGqNiyoyfviw6RdfWCEky1ZYrEAH3zqUUIGmp6d DCQCtrcVNlvFVL9mMiEewTscc5e8mdVT9Z2eOTp2e/DadwsYL6+Q3qvIoedbLtwO kYeyAeVeb+EZh+YNwHRkj3S8NZC7vRGbp5Yr01C2+XgJ4fxZqyfHcQzTXKjyuBqk 1hcpg== Received: from dc6wp-exch02.marvell.com ([4.21.29.225]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 46swp68100-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 21 May 2025 22:44:46 -0700 (PDT) Received: from DC6WP-EXCH02.marvell.com (10.76.176.209) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Wed, 21 May 2025 22:44:45 -0700 Received: from maili.marvell.com (10.69.176.80) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend 
Transport; Wed, 21 May 2025 22:44:45 -0700
Received: from tx2-sever.caveonetworks.com (unknown [10.110.141.15]) by maili.marvell.com (Postfix) with ESMTP id D757F5B694B; Wed, 21 May 2025 22:44:44 -0700 (PDT)
From: George Cherian
To: , , , ,
CC: ,
Subject: [PATCH v6 2/4] soc: marvell: rvu-pf: Add PF to AF mailbox communication support.
Date: Thu, 22 May 2025 05:44:42 +0000
Message-ID: <20250522054444.3531124-3-george.cherian@marvell.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250522054444.3531124-1-george.cherian@marvell.com>
References: <20250522054444.3531124-1-george.cherian@marvell.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: List-Subscribe: List-Unsubscribe:
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

From: Anshumali Gaur

Resource provisioning for virtual functions (VFs) is done by the RVU admin
function (AF). The RVU PF and the AF share a memory region which is used for
communication. This patch adds support for mailbox communication between the
PF and the AF; messages are signalled via IRQs.
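For orientation, the core of the mechanism added below can be condensed to
the following sketch. Structure, register and helper names are as in the
patch (the otx2_mbox_* and gen_pf_mbox_alloc_msg_* helpers come from the
existing AF mailbox code); the two function names marked illustrative are
simplifications, and error handling is trimmed:

/* Down path, condensed from rvu_gen_pf_register_mbox_intr(): build a READY
 * request directly in the shared mbox region (BAR4, mapped with
 * ioremap_wc()), ring the AF doorbell and wait for the response.
 */
static int rvu_gen_pf_af_handshake(struct gen_pf_dev *pfdev)	/* illustrative */
{
	struct msg_req *req;

	req = gen_pf_mbox_alloc_msg_ready(&pfdev->mbox);
	if (!req)
		return -ENOMEM;

	if (rvu_gen_pf_sync_mbox_msg(&pfdev->mbox))
		return -EPROBE_DEFER;	/* AF not up yet, retry probe later */
	return 0;
}

/* Response path, condensed from rvu_gen_pf_pfaf_mbox_intr_handler(): the
 * hard IRQ only acks the interrupt and schedules a worker; the replies are
 * parsed in process context.
 */
static irqreturn_t pfaf_mbox_intr(int irq, void *pf_irq)	/* illustrative */
{
	struct gen_pf_dev *pfdev = pf_irq;

	writeq(BIT_ULL(0), pfdev->reg_base + RVU_PF_INT);
	queue_work(pfdev->mbox_wq, &pfdev->mbox.mbox_wrk);
	return IRQ_HANDLED;
}
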
Example mailbox messages types and structures can be found at drivers/net/ethernet/marvell/octeontx2/af/mbox.h Signed-off-by: Anshumali Gaur Signed-off-by: George Cherian Reviewed-by: Alexander Sverdlin --- drivers/soc/marvell/rvu_gen_pf/gen_pf.c | 265 +++++++++++++++++++++++- drivers/soc/marvell/rvu_gen_pf/gen_pf.h | 124 +++++++++++ 2 files changed, 388 insertions(+), 1 deletion(-) diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c b/drivers/soc/marvell/= rvu_gen_pf/gen_pf.c index 6437916cb6d7..a03fc3f16c69 100644 --- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c +++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c @@ -16,6 +16,10 @@ #include #include =20 + /* PCI BAR nos */ +#define PCI_CFG_REG_BAR_NUM 2 +#define PCI_MBOX_BAR_NUM 4 + #define DRV_NAME "rvu_generic_pf" =20 /* Supported devices */ @@ -46,6 +50,230 @@ static int rvu_gen_pf_check_pf_usable(struct gen_pf_dev= *pfdev) return 0; } =20 +static irqreturn_t rvu_gen_pf_pfaf_mbox_intr_handler(int irq, void *pf_irq) +{ + struct gen_pf_dev *pfdev =3D (struct gen_pf_dev *)pf_irq; + struct mbox *mw =3D &pfdev->mbox; + struct otx2_mbox_dev *mdev; + struct otx2_mbox *mbox; + struct mbox_hdr *hdr; + u64 mbox_data; + + /* Clear the IRQ */ + writeq(BIT_ULL(0), pfdev->reg_base + RVU_PF_INT); + + mbox_data =3D readq(pfdev->reg_base + RVU_PF_PFAF_MBOX0); + + if (mbox_data & MBOX_UP_MSG) { + mbox_data &=3D ~MBOX_UP_MSG; + writeq(mbox_data, pfdev->reg_base + RVU_PF_PFAF_MBOX0); + + mbox =3D &mw->mbox_up; + mdev =3D &mbox->dev[0]; + otx2_sync_mbox_bbuf(mbox, 0); + + hdr =3D (struct mbox_hdr *)(mdev->mbase + mbox->rx_start); + if (hdr->num_msgs) + queue_work(pfdev->mbox_wq, &mw->mbox_up_wrk); + + trace_otx2_msg_interrupt(pfdev->pdev, "UP message from AF to PF", + BIT_ULL(0)); + } + + if (mbox_data & MBOX_DOWN_MSG) { + mbox_data &=3D ~MBOX_DOWN_MSG; + writeq(mbox_data, pfdev->reg_base + RVU_PF_PFAF_MBOX0); + + mbox =3D &mw->mbox; + mdev =3D &mbox->dev[0]; + otx2_sync_mbox_bbuf(mbox, 0); + + hdr =3D (struct mbox_hdr *)(mdev->mbase + mbox->rx_start); + if (hdr->num_msgs) + queue_work(pfdev->mbox_wq, &mw->mbox_wrk); + + trace_otx2_msg_interrupt(pfdev->pdev, "DOWN reply from AF to PF", + BIT_ULL(0)); + } + return IRQ_HANDLED; +} + +static void rvu_gen_pf_disable_mbox_intr(struct gen_pf_dev *pfdev) +{ + int vector =3D pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_AFPF_MBOX); + + /* Disable AF =3D> PF mailbox IRQ */ + writeq(BIT_ULL(0), pfdev->reg_base + RVU_PF_INT_ENA_W1C); + free_irq(vector, pfdev); +} + +static int rvu_gen_pf_register_mbox_intr(struct gen_pf_dev *pfdev) +{ + struct msg_req *req; + char *irq_name; + int err; + + /* Register mailbox interrupt handler */ + irq_name =3D &pfdev->irq_name[RVU_PF_INT_VEC_AFPF_MBOX * NAME_SIZE]; + snprintf(irq_name, NAME_SIZE, "Generic RVUPFAF Mbox"); + err =3D request_irq(pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_AFPF_MBOX), + rvu_gen_pf_pfaf_mbox_intr_handler, 0, irq_name, pfdev); + if (err) { + dev_err(pfdev->dev, + "GenPF: IRQ registration failed for PFAF mbox irq\n"); + return err; + } + + /* + * Enable mailbox interrupt for msgs coming from AF. + * First clear to avoid spurious interrupts, if any. 
+ */ + writeq(BIT_ULL(0), pfdev->reg_base + RVU_PF_INT); + writeq(BIT_ULL(0), pfdev->reg_base + RVU_PF_INT_ENA_W1S); + + /* Check mailbox communication with AF */ + req =3D gen_pf_mbox_alloc_msg_ready(&pfdev->mbox); + if (!req) { + rvu_gen_pf_disable_mbox_intr(pfdev); + return -ENOMEM; + } + err =3D rvu_gen_pf_sync_mbox_msg(&pfdev->mbox); + if (err) { + dev_warn(pfdev->dev, + "AF not responding to mailbox, deferring probe\n"); + rvu_gen_pf_disable_mbox_intr(pfdev); + return -EPROBE_DEFER; + } + return 0; +} + +static void rvu_gen_pf_pfaf_mbox_destroy(struct gen_pf_dev *pfdev) +{ + struct mbox *mbox =3D &pfdev->mbox; + + if (pfdev->mbox_wq) { + destroy_workqueue(pfdev->mbox_wq); + pfdev->mbox_wq =3D NULL; + } + + if (mbox->mbox.hwbase) + iounmap((void __iomem *)mbox->mbox.hwbase); + + otx2_mbox_destroy(&mbox->mbox); + otx2_mbox_destroy(&mbox->mbox_up); +} + +static void rvu_gen_pf_process_pfaf_mbox_msg(struct gen_pf_dev *pfdev, + struct mbox_msghdr *msg) +{ + if (msg->id >=3D MBOX_MSG_MAX) { + dev_err(pfdev->dev, + "Mbox msg with unknown ID 0x%x\n", msg->id); + return; + } + + if (msg->sig !=3D OTX2_MBOX_RSP_SIG) { + dev_err(pfdev->dev, + "Mbox msg with wrong signature %x, ID 0x%x\n", + msg->sig, msg->id); + return; + } + + switch (msg->id) { + case MBOX_MSG_READY: + pfdev->pcifunc =3D msg->pcifunc; + break; + default: + if (msg->rc) + dev_err(pfdev->dev, + "Mbox msg response has err %d, ID 0x%x\n", + msg->rc, msg->id); + break; + } +} + +static void rvu_gen_pf_pfaf_mbox_handler(struct work_struct *work) +{ + struct otx2_mbox_dev *mdev; + struct gen_pf_dev *pfdev; + struct mbox_hdr *rsp_hdr; + struct mbox_msghdr *msg; + struct otx2_mbox *mbox; + struct mbox *af_mbox; + int offset, id; + u16 num_msgs; + + af_mbox =3D container_of(work, struct mbox, mbox_wrk); + mbox =3D &af_mbox->mbox; + mdev =3D &mbox->dev[0]; + rsp_hdr =3D (struct mbox_hdr *)(mdev->mbase + mbox->rx_start); + num_msgs =3D rsp_hdr->num_msgs; + + offset =3D mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN); + pfdev =3D af_mbox->pfvf; + + for (id =3D 0; id < num_msgs; id++) { + msg =3D (struct mbox_msghdr *)(mdev->mbase + offset); + rvu_gen_pf_process_pfaf_mbox_msg(pfdev, msg); + offset =3D mbox->rx_start + msg->next_msgoff; + if (mdev->msgs_acked =3D=3D (num_msgs - 1)) + __otx2_mbox_reset(mbox, 0); + mdev->msgs_acked++; + } +} + +static int rvu_gen_pf_pfaf_mbox_init(struct gen_pf_dev *pfdev) +{ + struct mbox *mbox =3D &pfdev->mbox; + void __iomem *hwbase; + int err; + + mbox->pfvf =3D pfdev; + pfdev->mbox_wq =3D alloc_ordered_workqueue("otx2_pfaf_mailbox", + WQ_HIGHPRI | WQ_MEM_RECLAIM); + + if (!pfdev->mbox_wq) + return -ENOMEM; + + /* + * Mailbox is a reserved memory (in RAM) region shared between + * admin function (i.e AF) and this PF, shouldn't be mapped as + * device memory to allow unaligned accesses. 
+ */ + + hwbase =3D ioremap_wc(pci_resource_start(pfdev->pdev, PCI_MBOX_BAR_NUM), + MBOX_SIZE); + + if (!hwbase) { + dev_err(pfdev->dev, "Unable to map PFAF mailbox region\n"); + err =3D -ENOMEM; + goto exit; + } + + err =3D otx2_mbox_init(&mbox->mbox, hwbase, pfdev->pdev, pfdev->reg_base, + MBOX_DIR_PFAF, 1); + if (err) + goto exit; + + err =3D otx2_mbox_init(&mbox->mbox_up, hwbase, pfdev->pdev, pfdev->reg_ba= se, + MBOX_DIR_PFAF_UP, 1); + + if (err) + goto exit; + + err =3D otx2_mbox_bbuf_init(mbox, pfdev->pdev); + if (err) + goto exit; + + INIT_WORK(&mbox->mbox_wrk, rvu_gen_pf_pfaf_mbox_handler); + mutex_init(&mbox->lock); + + return 0; +exit: + rvu_gen_pf_pfaf_mbox_destroy(pfdev); + return err; +} + static int rvu_gen_pf_sriov_enable(struct pci_dev *pdev, int numvfs) { int ret; @@ -91,6 +319,7 @@ static int rvu_gen_pf_probe(struct pci_dev *pdev, const = struct pci_device_id *id { struct device *dev =3D &pdev->dev; struct gen_pf_dev *pfdev; + int num_vec; int err; =20 err =3D pcim_enable_device(pdev); @@ -123,13 +352,47 @@ static int rvu_gen_pf_probe(struct pci_dev *pdev, con= st struct pci_device_id *id pfdev->pdev =3D pdev; pfdev->dev =3D dev; pfdev->total_vfs =3D pci_sriov_get_totalvfs(pdev); + num_vec =3D pci_msix_vec_count(pdev); + pfdev->irq_name =3D devm_kmalloc_array(&pfdev->pdev->dev, num_vec, NAME_S= IZE, + GFP_KERNEL); + + /* Map CSRs */ + pfdev->reg_base =3D pcim_iomap(pdev, PCI_CFG_REG_BAR_NUM, 0); + if (!pfdev->reg_base) { + dev_err(dev, "Unable to map physical function CSRs, aborting\n"); + err =3D -ENOMEM; + goto err_release_regions; + } =20 err =3D rvu_gen_pf_check_pf_usable(pfdev); if (err) - goto err_release_regions; + goto err_pcim_iounmap; + + err =3D pci_alloc_irq_vectors(pfdev->pdev, num_vec, num_vec, PCI_IRQ_MSIX= ); + if (err < 0) { + dev_err(dev, "%s: Failed to alloc %d IRQ vectors\n", + __func__, num_vec); + goto err_pcim_iounmap; + } + + /* Init PF <=3D> AF mailbox stuff */ + err =3D rvu_gen_pf_pfaf_mbox_init(pfdev); + if (err) + goto err_free_irq_vectors; + + /* Register mailbox interrupt */ + err =3D rvu_gen_pf_register_mbox_intr(pfdev); + if (err) + goto err_mbox_destroy; =20 return 0; =20 +err_mbox_destroy: + rvu_gen_pf_pfaf_mbox_destroy(pfdev); +err_free_irq_vectors: + pci_free_irq_vectors(pfdev->pdev); +err_pcim_iounmap: + pcim_iounmap(pdev, pfdev->reg_base); err_release_regions: pci_release_regions(pdev); pci_set_drvdata(pdev, NULL); diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h b/drivers/soc/marvell/= rvu_gen_pf/gen_pf.h index d89b674b1a0f..2019bea10ad0 100644 --- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h +++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h @@ -5,15 +5,139 @@ */ #include #include +#include +#include "mbox.h" =20 #define RVU_PFFUNC(pf, func) \ ((((pf) & RVU_PFVF_PF_MASK) << RVU_PFVF_PF_SHIFT) | \ (((func) & RVU_PFVF_FUNC_MASK) << RVU_PFVF_FUNC_SHIFT)) =20 +#define NAME_SIZE 32 + +struct gen_pf_dev; + +struct mbox { + struct otx2_mbox mbox; + struct work_struct mbox_wrk; + struct otx2_mbox mbox_up; + struct work_struct mbox_up_wrk; + struct gen_pf_dev *pfvf; + void *bbuf_base; /* Bounce buffer for mbox memory */ + struct mutex lock; /* serialize mailbox access */ + int num_msgs; /* mbox number of messages */ + int up_num_msgs; /* mbox_up number of messages */ +}; + struct gen_pf_dev { struct pci_dev *pdev; struct device *dev; void __iomem *reg_base; + char *irq_name; + struct work_struct mbox_wrk; + struct work_struct mbox_wrk_up; + + /* Mbox */ + struct mbox mbox; + struct workqueue_struct *mbox_wq; + int pf; + u16 pcifunc; /* RVU 
PF_FUNC */ u8 total_vfs; }; + +/* Mbox APIs */ +static inline int rvu_gen_pf_sync_mbox_msg(struct mbox *mbox) +{ + int err; + + if (!otx2_mbox_nonempty(&mbox->mbox, 0)) + return 0; + otx2_mbox_msg_send(&mbox->mbox, 0); + err =3D otx2_mbox_wait_for_rsp(&mbox->mbox, 0); + if (err) + return err; + + return otx2_mbox_check_rsp_msgs(&mbox->mbox, 0); +} + +static inline int rvu_gen_pf_sync_mbox_up_msg(struct mbox *mbox, int devid) +{ + int err; + + if (!otx2_mbox_nonempty(&mbox->mbox_up, devid)) + return 0; + otx2_mbox_msg_send_up(&mbox->mbox_up, devid); + err =3D otx2_mbox_wait_for_rsp(&mbox->mbox_up, devid); + if (err) + return err; + + return otx2_mbox_check_rsp_msgs(&mbox->mbox_up, devid); +} + +#define M(_name, _id, _fn_name, _req_type, _rsp_type) \ +static struct _req_type __maybe_unused \ +*gen_pf_mbox_alloc_msg_ ## _fn_name(struct mbox *mbox) \ +{ \ + struct _req_type *req; \ + u16 id =3D _id; \ + \ + req =3D (struct _req_type *)otx2_mbox_alloc_msg_rsp( \ + &mbox->mbox, 0, sizeof(struct _req_type), \ + sizeof(struct _rsp_type)); \ + if (!req) \ + return NULL; \ + req->hdr.sig =3D OTX2_MBOX_REQ_SIG; \ + req->hdr.id =3D id; \ + trace_otx2_msg_alloc(mbox->mbox.pdev, id, sizeof(*req)); \ + return req; \ +} + +MBOX_MESSAGES +#undef M + +/* Mbox bounce buffer APIs */ +static inline int otx2_mbox_bbuf_init(struct mbox *mbox, struct pci_dev *p= dev) +{ + struct otx2_mbox *otx2_mbox; + struct otx2_mbox_dev *mdev; + + mbox->bbuf_base =3D devm_kmalloc(&pdev->dev, MBOX_SIZE, GFP_KERNEL); + + if (!mbox->bbuf_base) + return -ENOMEM; + + /* Overwrite mbox mbase to point to bounce buffer, so that PF/VF + * prepare all mbox messages in bounce buffer instead of directly + * in hw mbox memory. + */ + otx2_mbox =3D &mbox->mbox; + mdev =3D &otx2_mbox->dev[0]; + mdev->mbase =3D mbox->bbuf_base; + + otx2_mbox =3D &mbox->mbox_up; + mdev =3D &otx2_mbox->dev[0]; + mdev->mbase =3D mbox->bbuf_base; + return 0; +} + +static inline void otx2_sync_mbox_bbuf(struct otx2_mbox *mbox, int devid) +{ + u16 msgs_offset =3D ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN); + void *hw_mbase =3D mbox->hwbase + (devid * MBOX_SIZE); + struct otx2_mbox_dev *mdev =3D &mbox->dev[devid]; + struct mbox_hdr *hdr; + u64 msg_size; + + if (mdev->mbase =3D=3D hw_mbase) + return; + + hdr =3D hw_mbase + mbox->rx_start; + msg_size =3D hdr->msg_size; + + if (msg_size > mbox->rx_size - msgs_offset) + msg_size =3D mbox->rx_size - msgs_offset; + + /* Copy mbox messages from mbox memory to bounce buffer */ + memcpy(mdev->mbase + mbox->rx_start, + hw_mbase + mbox->rx_start, msg_size + msgs_offset); +} --=20 2.34.1 From nobody Sun Dec 14 12:17:30 2025 Received: from mx0a-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DD6201993B9 for ; Thu, 22 May 2025 05:45:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=67.231.148.174 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747892710; cv=none; b=L7euFRApdC1DtmlMNHEEmxsMDxgjLazUvy8Ad8l/MGDCySj72mfzwV1Sgr8VYs+Vm3Xfyj0J/vBATlDCwMFsAV8NmByDDT8s1zjxPttqyMW6Kzd59qZ4LFpSSwieMxzKzP7Vc8m6F9NFCIZQjM5lF65YbmTUh2QIcEyRUfAxW1g= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747892710; c=relaxed/simple; bh=Nwjn3MXj3E+CC5a+T4MfkZ9QJgLXAQI32waAIWrDL44=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; 
b=sUmZJlhA0hMV1z/f1z26TymddoCWlNtZEOUjH1vxn9Yaf3+tY2//Iv1GNP9qxLO9x6xudKyYf8ryvaG5TEeAu/61upO4RnVCRD2hwfTqepzB0t16gc9MBocp9Uul88o7tMku3MIQLE3XsMCbNiJFqn9l55k+Cd/tjhrVp3yQtFY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com; spf=pass smtp.mailfrom=marvell.com; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b=JJzKlaQw; arc=none smtp.client-ip=67.231.148.174 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=marvell.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b="JJzKlaQw" Received: from pps.filterd (m0431384.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 54M0H9Ii018805; Wed, 21 May 2025 22:44:46 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= cc:content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to; s=pfpt0220; bh=I 7w3K2LTowUITLEg84pFbQH0qRaB1re8Rok62FYBrUs=; b=JJzKlaQwPXPTEen7d SW5q/cjbFuS9r5KJd62vVhPH26Lo930IqND6T/rwzSIR7+wdpOutmaXXpjIHEJmx cHQRovXsc4hBIXjW8aEWLIF8nD1Sau4xPzyDwAxYszO4hW0lhZM5MLdAQfUrSTG5 4qatn1X7hMRm2r6OykOUyMuX9UAj9gPsVNZ7VRB+8ireyw8UFezdViHyf50JfYQt NhUQ3TSN4YhBHK+e2ad5VWYbKRYt38FjanCVYcnJafh4G9VIDjD3r0aykBCpGazo BifuOPPPMkBZpwRgltMWmY6uTgMEMQDL9tBkVrW2j+9JbgWaCuboRTpdTw6OeAfV v0UFA== Received: from dc5-exch05.marvell.com ([199.233.59.128]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 46sqap8nyb-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 21 May 2025 22:44:46 -0700 (PDT) Received: from DC5-EXCH05.marvell.com (10.69.176.209) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Wed, 21 May 2025 22:44:45 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Wed, 21 May 2025 22:44:45 -0700 Received: from tx2-sever.caveonetworks.com (unknown [10.110.141.15]) by maili.marvell.com (Postfix) with ESMTP id 0DFC75B694A; Wed, 21 May 2025 22:44:45 -0700 (PDT) From: George Cherian To: , , , , CC: , Subject: [PATCH v6 3/4] soc: marvell: rvu-pf: Add mailbox communication btw RVU VFs and PF. 
Date: Thu, 22 May 2025 05:44:43 +0000
Message-ID: <20250522054444.3531124-4-george.cherian@marvell.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250522054444.3531124-1-george.cherian@marvell.com>
References: <20250522054444.3531124-1-george.cherian@marvell.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: List-Subscribe: List-Unsubscribe:
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

From: Anshumali Gaur

The RVU PF shares a dedicated memory region with each of its VFs, which is
used to establish communication between them. Since the admin function (AF)
handles resource management, the PF does not process the messages sent by
the VFs itself; it only acts as an intermediary that relays them to the AF,
because the hardware does not support direct communication between the AF
and the VFs.
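The forwarding added below boils down to the following condensed sketch.
Field and helper names are the ones used in the patch, except the function
name and the two message-walking helpers marked illustrative; locking and
error handling are trimmed:

/* Condensed from rvu_gen_pf_pfvf_mbox_handler(): stamp every VF request
 * with that VF's function number, then hand the whole batch to the AF
 * without copying (inside rvu_gen_pf_forward_vf_mbox_msgs() the VF's mbox
 * region is reused as the PF's bounce buffer).
 */
static void pfvf_mbox_work(struct work_struct *work)		/* illustrative */
{
	struct mbox *vf_mbox = container_of(work, struct mbox, mbox_wrk);
	struct gen_pf_dev *pfdev = vf_mbox->pfvf;
	int vf_idx = vf_mbox - pfdev->mbox_pfvf;
	struct mbox_msghdr *msg = first_vf_msg(vf_mbox, vf_idx);	/* illustrative helper */
	int id;

	for (id = 0; id < vf_mbox->num_msgs; id++) {
		msg->pcifunc &= ~RVU_PFVF_FUNC_MASK;		/* tag the sender: VF index + 1 */
		msg->pcifunc |= (vf_idx + 1) & RVU_PFVF_FUNC_MASK;
		msg = next_vf_msg(vf_mbox, msg);		/* illustrative helper */
	}

	rvu_gen_pf_forward_vf_mbox_msgs(pfdev, &pfdev->mbox_pfvf[0].mbox,
					MBOX_DIR_PFAF, vf_idx,
					vf_mbox->num_msgs);
}
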
Signed-off-by: Anshumali Gaur Signed-off-by: George Cherian Reviewed-by: Alexander Sverdlin --- drivers/soc/marvell/rvu_gen_pf/gen_pf.c | 437 ++++++++++++++++++++++++ drivers/soc/marvell/rvu_gen_pf/gen_pf.h | 2 + 2 files changed, 439 insertions(+) diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c b/drivers/soc/marvell/= rvu_gen_pf/gen_pf.c index a03fc3f16c69..d7f96b9994cb 100644 --- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c +++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c @@ -50,6 +50,120 @@ static int rvu_gen_pf_check_pf_usable(struct gen_pf_dev= *pfdev) return 0; } =20 +static void rvu_gen_pf_forward_msg_pfvf(struct otx2_mbox_dev *mdev, + struct otx2_mbox *pfvf_mbox, void *bbuf_base, + int devid) +{ + struct otx2_mbox_dev *src_mdev =3D mdev; + int offset; + + /* Msgs are already copied, trigger VF's mbox irq */ + smp_wmb(); + + otx2_mbox_wait_for_zero(pfvf_mbox, devid); + offset =3D pfvf_mbox->trigger | (devid << pfvf_mbox->tr_shift); + writeq(MBOX_DOWN_MSG, (void __iomem *)pfvf_mbox->reg_base + offset); + + /* Restore VF's mbox bounce buffer region address */ + src_mdev->mbase =3D bbuf_base; +} + +static int rvu_gen_pf_forward_vf_mbox_msgs(struct gen_pf_dev *pfdev, + struct otx2_mbox *src_mbox, + int dir, int vf, int num_msgs) +{ + struct otx2_mbox_dev *src_mdev, *dst_mdev; + struct mbox_hdr *mbox_hdr; + struct mbox_hdr *req_hdr; + struct mbox *dst_mbox; + int dst_size, err; + + if (dir =3D=3D MBOX_DIR_PFAF) { + /* + * Set VF's mailbox memory as PF's bounce buffer memory, so + * that explicit copying of VF's msgs to PF=3D>AF mbox region + * and AF=3D>PF responses to VF's mbox region can be avoided. + */ + src_mdev =3D &src_mbox->dev[vf]; + mbox_hdr =3D src_mbox->hwbase + + src_mbox->rx_start + (vf * MBOX_SIZE); + + dst_mbox =3D &pfdev->mbox; + dst_size =3D dst_mbox->mbox.tx_size - + ALIGN(sizeof(*mbox_hdr), MBOX_MSG_ALIGN); + /* Check if msgs fit into destination area and has valid size */ + if (mbox_hdr->msg_size > dst_size || !mbox_hdr->msg_size) + return -EINVAL; + + dst_mdev =3D &dst_mbox->mbox.dev[0]; + + mutex_lock(&pfdev->mbox.lock); + dst_mdev->mbase =3D src_mdev->mbase; + dst_mdev->msg_size =3D mbox_hdr->msg_size; + dst_mdev->num_msgs =3D num_msgs; + err =3D rvu_gen_pf_sync_mbox_msg(dst_mbox); + /* + * Error code -EIO indicate there is a communication failure + * to the AF. Rest of the error codes indicate that AF processed + * VF messages and set the error codes in response messages + * (if any) so simply forward responses to VF. + */ + if (err =3D=3D -EIO) { + dev_warn(pfdev->dev, + "AF not responding to VF%d messages\n", vf); + /* restore PF mbase and exit */ + dst_mdev->mbase =3D pfdev->mbox.bbuf_base; + mutex_unlock(&pfdev->mbox.lock); + return err; + } + /* + * At this point, all the VF messages sent to AF are acked + * with proper responses and responses are copied to VF + * mailbox hence raise interrupt to VF. 
+ */ + req_hdr =3D (struct mbox_hdr *)(dst_mdev->mbase + + dst_mbox->mbox.rx_start); + req_hdr->num_msgs =3D num_msgs; + + rvu_gen_pf_forward_msg_pfvf(dst_mdev, &pfdev->mbox_pfvf[0].mbox, + pfdev->mbox.bbuf_base, vf); + mutex_unlock(&pfdev->mbox.lock); + } else if (dir =3D=3D MBOX_DIR_PFVF_UP) { + src_mdev =3D &src_mbox->dev[0]; + mbox_hdr =3D src_mbox->hwbase + src_mbox->rx_start; + req_hdr =3D (struct mbox_hdr *)(src_mdev->mbase + + src_mbox->rx_start); + req_hdr->num_msgs =3D num_msgs; + + dst_mbox =3D &pfdev->mbox_pfvf[0]; + dst_size =3D dst_mbox->mbox_up.tx_size - + ALIGN(sizeof(*mbox_hdr), MBOX_MSG_ALIGN); + /* Check if msgs fit into destination area */ + if (mbox_hdr->msg_size > dst_size) + return -EINVAL; + dst_mdev =3D &dst_mbox->mbox_up.dev[vf]; + dst_mdev->mbase =3D src_mdev->mbase; + dst_mdev->msg_size =3D mbox_hdr->msg_size; + dst_mdev->num_msgs =3D mbox_hdr->num_msgs; + err =3D rvu_gen_pf_sync_mbox_up_msg(dst_mbox, vf); + if (err) { + dev_warn(pfdev->dev, + "VF%d is not responding to mailbox\n", vf); + return err; + } + } else if (dir =3D=3D MBOX_DIR_VFPF_UP) { + req_hdr =3D (struct mbox_hdr *)(src_mbox->dev[0].mbase + + src_mbox->rx_start); + req_hdr->num_msgs =3D num_msgs; + rvu_gen_pf_forward_msg_pfvf(&pfdev->mbox_pfvf->mbox_up.dev[vf], + &pfdev->mbox.mbox_up, + pfdev->mbox_pfvf[vf].bbuf_base, + 0); + } + + return 0; +} + static irqreturn_t rvu_gen_pf_pfaf_mbox_intr_handler(int irq, void *pf_irq) { struct gen_pf_dev *pfdev =3D (struct gen_pf_dev *)pf_irq; @@ -192,6 +306,39 @@ static void rvu_gen_pf_process_pfaf_mbox_msg(struct ge= n_pf_dev *pfdev, } } =20 +static void rvu_gen_pf_pfaf_mbox_up_handler(struct work_struct *work) +{ + struct mbox *af_mbox =3D container_of(work, struct mbox, mbox_up_wrk); + struct otx2_mbox *mbox =3D &af_mbox->mbox_up; + struct otx2_mbox_dev *mdev =3D &mbox->dev[0]; + struct gen_pf_dev *pfdev =3D af_mbox->pfvf; + int offset, id, devid =3D 0; + struct mbox_hdr *rsp_hdr; + struct mbox_msghdr *msg; + u16 num_msgs; + + rsp_hdr =3D (struct mbox_hdr *)(mdev->mbase + mbox->rx_start); + num_msgs =3D rsp_hdr->num_msgs; + + offset =3D mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN); + + for (id =3D 0; id < num_msgs; id++) { + msg =3D (struct mbox_msghdr *)(mdev->mbase + offset); + + devid =3D msg->pcifunc & RVU_PFVF_FUNC_MASK; + offset =3D mbox->rx_start + msg->next_msgoff; + } + /* Forward to VF iff VFs are really present */ + if (devid && pci_num_vf(pfdev->pdev)) { + rvu_gen_pf_forward_vf_mbox_msgs(pfdev, &pfdev->mbox.mbox_up, + MBOX_DIR_PFVF_UP, devid - 1, + num_msgs); + return; + } + + otx2_mbox_msg_send(mbox, 0); +} + static void rvu_gen_pf_pfaf_mbox_handler(struct work_struct *work) { struct otx2_mbox_dev *mdev; @@ -266,6 +413,7 @@ static int rvu_gen_pf_pfaf_mbox_init(struct gen_pf_dev = *pfdev) goto exit; =20 INIT_WORK(&mbox->mbox_wrk, rvu_gen_pf_pfaf_mbox_handler); + INIT_WORK(&mbox->mbox_up_wrk, rvu_gen_pf_pfaf_mbox_up_handler); mutex_init(&mbox->lock); =20 return 0; @@ -274,19 +422,305 @@ static int rvu_gen_pf_pfaf_mbox_init(struct gen_pf_d= ev *pfdev) return err; } =20 +static void rvu_gen_pf_pfvf_mbox_handler(struct work_struct *work) +{ + struct mbox_msghdr *msg =3D NULL; + int offset, vf_idx, id, err; + struct otx2_mbox_dev *mdev; + struct gen_pf_dev *pfdev; + struct mbox_hdr *req_hdr; + struct otx2_mbox *mbox; + struct mbox *vf_mbox; + + vf_mbox =3D container_of(work, struct mbox, mbox_wrk); + pfdev =3D vf_mbox->pfvf; + vf_idx =3D vf_mbox - pfdev->mbox_pfvf; + + mbox =3D &pfdev->mbox_pfvf[0].mbox; + mdev =3D &mbox->dev[vf_idx]; + 
req_hdr =3D (struct mbox_hdr *)(mdev->mbase + mbox->rx_start); + + offset =3D ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN); + + for (id =3D 0; id < vf_mbox->num_msgs; id++) { + msg =3D (struct mbox_msghdr *)(mdev->mbase + mbox->rx_start + + offset); + + if (msg->sig !=3D OTX2_MBOX_REQ_SIG) + goto inval_msg; + + /* Set VF's number in each of the msg */ + msg->pcifunc &=3D ~RVU_PFVF_FUNC_MASK; + msg->pcifunc |=3D (vf_idx + 1) & RVU_PFVF_FUNC_MASK; + offset =3D msg->next_msgoff; + } + err =3D rvu_gen_pf_forward_vf_mbox_msgs(pfdev, mbox, MBOX_DIR_PFAF, vf_id= x, + vf_mbox->num_msgs); + if (err) + goto inval_msg; + return; + +inval_msg: + if (!msg) + return; + + otx2_reply_invalid_msg(mbox, vf_idx, 0, msg->id); + otx2_mbox_msg_send(mbox, vf_idx); +} + +static int rvu_gen_pf_pfvf_mbox_init(struct gen_pf_dev *pfdev, int numvfs) +{ + void __iomem *hwbase; + struct mbox *mbox; + int err, vf; + u64 base; + + if (!numvfs) + return -EINVAL; + + pfdev->mbox_pfvf =3D devm_kcalloc(&pfdev->pdev->dev, numvfs, + sizeof(struct mbox), GFP_KERNEL); + + if (!pfdev->mbox_pfvf) + return -ENOMEM; + + pfdev->mbox_pfvf_wq =3D alloc_workqueue("otx2_pfvf_mailbox", + WQ_UNBOUND | WQ_HIGHPRI | + WQ_MEM_RECLAIM, 0); + if (!pfdev->mbox_pfvf_wq) + return -ENOMEM; + + /* + * PF <-> VF mailbox region follows after + * PF <-> AF mailbox region. + */ + base =3D pci_resource_start(pfdev->pdev, PCI_MBOX_BAR_NUM) + MBOX_SIZE; + + hwbase =3D ioremap_wc(base, MBOX_SIZE * pfdev->total_vfs); + if (!hwbase) { + err =3D -ENOMEM; + goto free_wq; + } + + mbox =3D &pfdev->mbox_pfvf[0]; + err =3D otx2_mbox_init(&mbox->mbox, hwbase, pfdev->pdev, pfdev->reg_base, + MBOX_DIR_PFVF, numvfs); + if (err) + goto free_iomem; + + err =3D otx2_mbox_init(&mbox->mbox_up, hwbase, pfdev->pdev, pfdev->reg_ba= se, + MBOX_DIR_PFVF_UP, numvfs); + if (err) + goto free_iomem; + + for (vf =3D 0; vf < numvfs; vf++) { + mbox->pfvf =3D pfdev; + INIT_WORK(&mbox->mbox_wrk, rvu_gen_pf_pfvf_mbox_handler); + mbox++; + } + + return 0; + +free_iomem: + if (hwbase) + iounmap(hwbase); +free_wq: + destroy_workqueue(pfdev->mbox_pfvf_wq); + return err; +} + +static void rvu_gen_pf_pfvf_mbox_destroy(struct gen_pf_dev *pfdev) +{ + struct mbox *mbox =3D &pfdev->mbox_pfvf[0]; + + if (!mbox) + return; + + if (pfdev->mbox_pfvf_wq) { + destroy_workqueue(pfdev->mbox_pfvf_wq); + pfdev->mbox_pfvf_wq =3D NULL; + } + + if (mbox->mbox.hwbase) + iounmap((void __iomem *)mbox->mbox.hwbase); + + otx2_mbox_destroy(&mbox->mbox); +} + +static void rvu_gen_pf_enable_pfvf_mbox_intr(struct gen_pf_dev *pfdev, int= numvfs) +{ + /* Clear PF <=3D> VF mailbox IRQ */ + writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(0)); + writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(1)); + + /* Enable PF <=3D> VF mailbox IRQ */ + writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFPF_MBOX_INT_ENA_W1SX= (0)); + if (numvfs > 64) { + numvfs -=3D 64; + writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFPF_MBOX_INT_ENA_W1S= X(1)); + } +} + +static void rvu_gen_pf_disable_pfvf_mbox_intr(struct gen_pf_dev *pfdev, in= t numvfs) +{ + int vector; + + /* Disable PF <=3D> VF mailbox IRQ */ + writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INT_ENA_W1CX(0)); + writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INT_ENA_W1CX(1)); + + writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(0)); + vector =3D pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFPF_MBOX0); + free_irq(vector, pfdev); + + if (numvfs > 64) { + writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(1)); + vector =3D pci_irq_vector(pfdev->pdev, 
RVU_PF_INT_VEC_VFPF_MBOX1); + free_irq(vector, pfdev); + } +} + +static void rvu_gen_pf_queue_vf_work(struct mbox *mw, struct workqueue_str= uct *mbox_wq, + int first, int mdevs, u64 intr) +{ + struct otx2_mbox_dev *mdev; + struct otx2_mbox *mbox; + struct mbox_hdr *hdr; + int i; + + for (i =3D first; i < mdevs; i++) { + /* start from 0 */ + if (!(intr & BIT_ULL(i - first))) + continue; + + mbox =3D &mw->mbox; + mdev =3D &mbox->dev[i]; + hdr =3D mdev->mbase + mbox->rx_start; + /* + * The hdr->num_msgs is set to zero immediately in the interrupt + * handler to ensure that it holds a correct value next time + * when the interrupt handler is called. pf->mw[i].num_msgs + * holds the data for use in otx2_pfvf_mbox_handler and + * pf->mw[i].up_num_msgs holds the data for use in + * otx2_pfvf_mbox_up_handler. + */ + if (hdr->num_msgs) { + mw[i].num_msgs =3D hdr->num_msgs; + hdr->num_msgs =3D 0; + queue_work(mbox_wq, &mw[i].mbox_wrk); + } + + mbox =3D &mw->mbox_up; + mdev =3D &mbox->dev[i]; + hdr =3D mdev->mbase + mbox->rx_start; + if (hdr->num_msgs) { + mw[i].up_num_msgs =3D hdr->num_msgs; + hdr->num_msgs =3D 0; + queue_work(mbox_wq, &mw[i].mbox_up_wrk); + } + } +} + +static irqreturn_t rvu_gen_pf_pfvf_mbox_intr_handler(int irq, void *pf_irq) +{ + struct gen_pf_dev *pfdev =3D (struct gen_pf_dev *)(pf_irq); + int vfs =3D pfdev->total_vfs; + struct mbox *mbox; + u64 intr; + + mbox =3D pfdev->mbox_pfvf; + /* Handle VF interrupts */ + if (vfs > 64) { + intr =3D readq(pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(1)); + writeq(intr, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(1)); + rvu_gen_pf_queue_vf_work(mbox, pfdev->mbox_pfvf_wq, 64, vfs, intr); + if (intr) + trace_otx2_msg_interrupt(mbox->mbox.pdev, "VF(s) to PF", intr); + vfs =3D 64; + } + + intr =3D readq(pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(0)); + writeq(intr, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(0)); + + rvu_gen_pf_queue_vf_work(mbox, pfdev->mbox_pfvf_wq, 0, vfs, intr); + + if (intr) + trace_otx2_msg_interrupt(mbox->mbox.pdev, "VF(s) to PF", intr); + + return IRQ_HANDLED; +} + +static int rvu_gen_pf_register_pfvf_mbox_intr(struct gen_pf_dev *pfdev, in= t numvfs) +{ + char *irq_name; + int err; + + /* Register MBOX0 interrupt handler */ + irq_name =3D &pfdev->irq_name[RVU_PF_INT_VEC_VFPF_MBOX0 * NAME_SIZE]; + if (pfdev->pcifunc) + snprintf(irq_name, NAME_SIZE, + "Generic RVUPF%d_VF Mbox0", rvu_get_pf(pfdev->pcifunc)); + else + snprintf(irq_name, NAME_SIZE, "Generic RVUPF_VF Mbox0"); + err =3D request_irq(pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFPF_MBOX0= ), + rvu_gen_pf_pfvf_mbox_intr_handler, 0, irq_name, pfdev); + if (err) { + dev_err(pfdev->dev, + "RVUPF: IRQ registration failed for PFVF mbox0 irq\n"); + return err; + } + + if (numvfs > 64) { + /* Register MBOX1 interrupt handler */ + irq_name =3D &pfdev->irq_name[RVU_PF_INT_VEC_VFPF_MBOX1 * NAME_SIZE]; + if (pfdev->pcifunc) + snprintf(irq_name, NAME_SIZE, + "Generic RVUPF%d_VF Mbox1", rvu_get_pf(pfdev->pcifunc)); + else + snprintf(irq_name, NAME_SIZE, "Generic RVUPF_VF Mbox1"); + err =3D request_irq(pci_irq_vector(pfdev->pdev, + RVU_PF_INT_VEC_VFPF_MBOX1), + rvu_gen_pf_pfvf_mbox_intr_handler, + 0, irq_name, pfdev); + if (err) { + dev_err(pfdev->dev, + "RVUPF: IRQ registration failed for PFVF mbox1 irq\n"); + return err; + } + } + + rvu_gen_pf_enable_pfvf_mbox_intr(pfdev, numvfs); + + return 0; +} + static int rvu_gen_pf_sriov_enable(struct pci_dev *pdev, int numvfs) { + struct gen_pf_dev *pfdev =3D pci_get_drvdata(pdev); int ret; =20 + /* Init PF <=3D> VF mailbox stuff */ + ret =3D 
rvu_gen_pf_pfvf_mbox_init(pfdev, numvfs); + if (ret) + return ret; + + ret =3D rvu_gen_pf_register_pfvf_mbox_intr(pfdev, numvfs); + if (ret) + goto free_mbox; + ret =3D pci_enable_sriov(pdev, numvfs); if (ret) return ret; =20 return numvfs; +free_mbox: + rvu_gen_pf_pfvf_mbox_destroy(pfdev); + return ret; } =20 static int rvu_gen_pf_sriov_disable(struct pci_dev *pdev) { + struct gen_pf_dev *pfdev =3D pci_get_drvdata(pdev); int numvfs =3D pci_num_vf(pdev); =20 if (!numvfs) @@ -294,6 +728,9 @@ static int rvu_gen_pf_sriov_disable(struct pci_dev *pde= v) =20 pci_disable_sriov(pdev); =20 + rvu_gen_pf_disable_pfvf_mbox_intr(pfdev, numvfs); + rvu_gen_pf_pfvf_mbox_destroy(pfdev); + return 0; } =20 diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h b/drivers/soc/marvell/= rvu_gen_pf/gen_pf.h index 2019bea10ad0..ad651b97b661 100644 --- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h +++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h @@ -38,7 +38,9 @@ struct gen_pf_dev { =20 /* Mbox */ struct mbox mbox; + struct mbox *mbox_pfvf; struct workqueue_struct *mbox_wq; + struct workqueue_struct *mbox_pfvf_wq; =20 int pf; u16 pcifunc; /* RVU PF_FUNC */ --=20 2.34.1 From nobody Sun Dec 14 12:17:30 2025 Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A537B1A4E70 for ; Thu, 22 May 2025 05:44:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=67.231.156.173 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747892698; cv=none; b=QsouOmhNDGf58uQu7EVP8vCp1/AVT00Xow25LHWlnJTEgsXSmHR9eNr/Ipbbjt/Az0p+g1rmqP8+0uja77BXNQh5NRbTcHkLVoKbm0jFI/smPqpNe2+iASV+TRO66G5rB7+7C9HQEDj+FiErra6QThqxp003xgaMy2yBoaemU2g= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747892698; c=relaxed/simple; bh=sMM6T+Lz3X2wXsCZZmH9P0KQk3BRY7oGiOm25KqMIn0=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=ZxXH3XnIkjVvNdDYh8faVdiBJOh3qGtVi6f7v4+DM4c2YJwOhVAFeirh/lAhBE6FRnGMnVm626qho0Z50pOrD1jafQ/79VLpUKDmbnAWiHpc3lVLkZmYp/cVPvTJVSRKeUR1mdP0zVwQSpiLb5f3XsRL8YUg+/duWdy31nrgsA0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com; spf=pass smtp.mailfrom=marvell.com; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b=VIY2GxyH; arc=none smtp.client-ip=67.231.156.173 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=marvell.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=marvell.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=marvell.com header.i=@marvell.com header.b="VIY2GxyH" Received: from pps.filterd (m0431383.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 54M5JgrM013402; Wed, 21 May 2025 22:44:47 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= cc:content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to; s=pfpt0220; bh=9 EGIyDYUqUFKTDf6MLIxK/cIfbCO1p9jXW4iLEpu8Xw=; b=VIY2GxyHjwnNPh5vW 0r6NE4lyFe5WukGyWFhMKmuC+fq4gDTIfbtu4w1Au/DbkLFcT1b21ri1n0cynvDG mQcJatN7N1LdpYJcQ/NL42uQ6fv1iRQX+bSqDee6qVArP0UHWALS6wJVnBCwwNum +yJuB+UsEZIdPotd6sUpE01kZCYq9SwQX7c9JXQBc1gpp8MMLpkeHaGXsGfDdDgh 
From: George Cherian
To: , , , ,
CC: ,
Subject: [PATCH v6 4/4] soc: marvell: rvu-pf: Handle function level reset (FLR) IRQs for VFs
Date: Thu, 22 May 2025 05:44:44 +0000
Message-ID: <20250522054444.3531124-5-george.cherian@marvell.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250522054444.3531124-1-george.cherian@marvell.com>
References: <20250522054444.3531124-1-george.cherian@marvell.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: List-Subscribe: List-Unsubscribe:
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

From: Anshumali Gaur

Add a PCIe FLR interrupt handler for VFs. When FLR is triggered for a VF,
the parent PF gets an interrupt. The PF creates a mbox message and sends it
to the RVU admin function (AF). The AF cleans up all the resources attached
to that specific VF and acks the PF once the FLR has been handled.
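Condensed from the patch below, the per-VF FLR worker looks roughly like
this (register and helper names as in the patch; the function name is
illustrative, locking is trimmed, and register index 0 assumes fewer than
64 VFs, the patch also handles the second register bank):

/* Condensed from rvu_gen_pf_flr_handler(): ask the AF to clean up this VF,
 * and only after the AF acks clear the VF's transaction-pending bit and
 * re-arm the FLR interrupt that the hard IRQ handler masked.
 */
static void vf_flr_work(struct work_struct *work)		/* illustrative */
{
	struct flr_work *flrwork = container_of(work, struct flr_work, work);
	struct gen_pf_dev *pfdev = flrwork->pfdev;
	int vf = flrwork - pfdev->flr_wrk;
	struct msg_req *req;

	req = gen_pf_mbox_alloc_msg_vf_flr(&pfdev->mbox);
	if (!req)
		return;
	req->hdr.pcifunc &= ~RVU_PFVF_FUNC_MASK;
	req->hdr.pcifunc |= (vf + 1) & RVU_PFVF_FUNC_MASK;	/* which VF to clean up */

	if (!rvu_gen_pf_sync_mbox_msg(&pfdev->mbox)) {		/* AF acked */
		writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFTRPENDX(0));
		writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1SX(0));
	}
}
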
Signed-off-by: Anshumali Gaur
Signed-off-by: George Cherian
Reviewed-by: Alexander Sverdlin
---
 drivers/soc/marvell/rvu_gen_pf/gen_pf.c | 232 +++++++++++++++++++++++-
 drivers/soc/marvell/rvu_gen_pf/gen_pf.h |   7 +
 2 files changed, 238 insertions(+), 1 deletion(-)

diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
index d7f96b9994cb..4f147cd0d43b 100644
--- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
+++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
@@ -621,6 +621,15 @@ static void rvu_gen_pf_queue_vf_work(struct mbox *mw, struct workqueue_struct *m
 	}
 }
 
+static void rvu_gen_pf_flr_wq_destroy(struct gen_pf_dev *pfdev)
+{
+	if (!pfdev->flr_wq)
+		return;
+	destroy_workqueue(pfdev->flr_wq);
+	pfdev->flr_wq = NULL;
+	devm_kfree(pfdev->dev, pfdev->flr_wrk);
+}
+
 static irqreturn_t rvu_gen_pf_pfvf_mbox_intr_handler(int irq, void *pf_irq)
 {
 	struct gen_pf_dev *pfdev = (struct gen_pf_dev *)(pf_irq);
@@ -694,6 +703,211 @@ static int rvu_gen_pf_register_pfvf_mbox_intr(struct gen_pf_dev *pfdev, int numv
 	return 0;
 }
 
+static void rvu_gen_pf_flr_handler(struct work_struct *work)
+{
+	struct flr_work *flrwork = container_of(work, struct flr_work, work);
+	struct gen_pf_dev *pfdev = flrwork->pfdev;
+	struct mbox *mbox = &pfdev->mbox;
+	struct msg_req *req;
+	int vf, reg = 0;
+
+	vf = flrwork - pfdev->flr_wrk;
+
+	mutex_lock(&mbox->lock);
+	req = gen_pf_mbox_alloc_msg_vf_flr(mbox);
+	if (!req) {
+		mutex_unlock(&mbox->lock);
+		return;
+	}
+	req->hdr.pcifunc &= ~RVU_PFVF_FUNC_MASK;
+	req->hdr.pcifunc |= (vf + 1) & RVU_PFVF_FUNC_MASK;
+
+	if (!rvu_gen_pf_sync_mbox_msg(&pfdev->mbox)) {
+		if (vf >= 64) {
+			reg = 1;
+			vf = vf - 64;
+		}
+		/* clear transaction pending bit */
+		writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFTRPENDX(reg));
+		writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1SX(reg));
+	}
+
+	mutex_unlock(&mbox->lock);
+}
+
+static irqreturn_t rvu_gen_pf_me_intr_handler(int irq, void *pf_irq)
+{
+	struct gen_pf_dev *pfdev = (struct gen_pf_dev *)pf_irq;
+	int vf, reg, num_reg = 1;
+	u64 intr;
+
+	if (pfdev->total_vfs > 64)
+		num_reg = 2;
+
+	for (reg = 0; reg < num_reg; reg++) {
+		intr = readq(pfdev->reg_base + RVU_PF_VFME_INTX(reg));
+		if (!intr)
+			continue;
+		for (vf = 0; vf < 64; vf++) {
+			if (!(intr & BIT_ULL(vf)))
+				continue;
+			/* clear trpend bit */
+			writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFTRPENDX(reg));
+			/* clear interrupt */
+			writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFME_INTX(reg));
+		}
+	}
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t rvu_gen_pf_flr_intr_handler(int irq, void *pf_irq)
+{
+	struct gen_pf_dev *pfdev = (struct gen_pf_dev *)pf_irq;
+	int reg, dev, vf, start_vf, num_reg = 1;
+	u64 intr;
+
+	if (pfdev->total_vfs > 64)
+		num_reg = 2;
+
+	for (reg = 0; reg < num_reg; reg++) {
+		intr = readq(pfdev->reg_base + RVU_PF_VFFLR_INTX(reg));
+		if (!intr)
+			continue;
+		start_vf = 64 * reg;
+		for (vf = 0; vf < 64; vf++) {
+			if (!(intr & BIT_ULL(vf)))
+				continue;
+			dev = vf + start_vf;
+			queue_work(pfdev->flr_wq, &pfdev->flr_wrk[dev].work);
+			/* Clear interrupt */
+			writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFFLR_INTX(reg));
+			/* Disable the interrupt */
+			writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1CX(reg));
+		}
+	}
+	return IRQ_HANDLED;
+}
+
+static int rvu_gen_pf_register_flr_me_intr(struct gen_pf_dev *pfdev, int numvfs)
+{
+	char *irq_name;
+	int ret;
+
+	/* Register ME interrupt handler */
+	irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_VFME0 * NAME_SIZE];
+	snprintf(irq_name, NAME_SIZE, "Generic RVUPF%d_ME0", rvu_get_pf(pfdev->pcifunc));
+	ret = request_irq(pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFME0),
+			  rvu_gen_pf_me_intr_handler, 0, irq_name, pfdev);
+
+	if (ret) {
+		dev_err(pfdev->dev,
+			"Generic RVUPF: IRQ registration failed for ME0\n");
+	}
+
+	/* Register FLR interrupt handler */
+	irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_VFFLR0 * NAME_SIZE];
+	snprintf(irq_name, NAME_SIZE, "Generic RVUPF%d_FLR0", rvu_get_pf(pfdev->pcifunc));
+	ret = request_irq(pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFFLR0),
+			  rvu_gen_pf_flr_intr_handler, 0, irq_name, pfdev);
+	if (ret) {
+		dev_err(pfdev->dev,
+			"Generic RVUPF: IRQ registration failed for FLR0\n");
+		return ret;
+	}
+
+	if (numvfs > 64) {
+		irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_VFME1 * NAME_SIZE];
+		snprintf(irq_name, NAME_SIZE, "Generic RVUPF%d_ME1",
+			 rvu_get_pf(pfdev->pcifunc));
+		ret = request_irq(pci_irq_vector
+				  (pfdev->pdev, RVU_PF_INT_VEC_VFME1),
+				  rvu_gen_pf_me_intr_handler, 0, irq_name, pfdev);
+		if (ret) {
+			dev_err(pfdev->dev,
+				"Generic RVUPF: IRQ registration failed for ME1\n");
+		}
+		irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_VFFLR1 * NAME_SIZE];
+		snprintf(irq_name, NAME_SIZE, "Generic RVUPF%d_FLR1",
+			 rvu_get_pf(pfdev->pcifunc));
+		ret = request_irq(pci_irq_vector
+				  (pfdev->pdev, RVU_PF_INT_VEC_VFFLR1),
+				  rvu_gen_pf_flr_intr_handler, 0, irq_name, pfdev);
+		if (ret) {
+			dev_err(pfdev->dev,
+				"Generic RVUPF: IRQ registration failed for FLR1\n");
+			return ret;
+		}
+	}
+
+	/* Enable ME interrupt for all VFs */
+	writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFME_INTX(0));
+	writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFME_INT_ENA_W1SX(0));
+
+	/* Enable FLR interrupt for all VFs */
+	writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFFLR_INTX(0));
+	writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1SX(0));
+
+	if (numvfs > 64) {
+		numvfs -= 64;
+
+		writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFME_INTX(1));
+		writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFME_INT_ENA_W1SX(1));
+
+		writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFFLR_INTX(1));
+		writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1SX(1));
+	}
+	return 0;
+}
+
+static void rvu_gen_pf_disable_flr_me_intr(struct gen_pf_dev *pfdev)
+{
+	int irq, vfs = pfdev->total_vfs;
+
+	/* Disable VFs ME interrupts */
+	writeq(INTR_MASK(vfs), pfdev->reg_base + RVU_PF_VFME_INT_ENA_W1CX(0));
+	irq = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFME0);
+	free_irq(irq, pfdev);
+
+	/* Disable VFs FLR interrupts */
+	writeq(INTR_MASK(vfs), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1CX(0));
+	irq = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFFLR0);
+	free_irq(irq, pfdev);
+
+	if (vfs <= 64)
+		return;
+
+	writeq(INTR_MASK(vfs - 64), pfdev->reg_base + RVU_PF_VFME_INT_ENA_W1CX(1));
+	irq = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFME1);
+	free_irq(irq, pfdev);
+
+	writeq(INTR_MASK(vfs - 64), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1CX(1));
+	irq = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFFLR1);
+	free_irq(irq, pfdev);
+}
+
+static int rvu_gen_pf_flr_init(struct gen_pf_dev *pfdev, int num_vfs)
+{
+	int vf;
+
+	pfdev->flr_wq = alloc_ordered_workqueue("otx2_pf_flr_wq", WQ_HIGHPRI);
+	if (!pfdev->flr_wq)
+		return -ENOMEM;
+
+	pfdev->flr_wrk = devm_kcalloc(pfdev->dev, num_vfs,
+				      sizeof(struct flr_work), GFP_KERNEL);
+	if (!pfdev->flr_wrk) {
+		destroy_workqueue(pfdev->flr_wq);
+		return -ENOMEM;
+	}
+
+	for (vf = 0; vf < num_vfs; vf++) {
+		pfdev->flr_wrk[vf].pfdev = pfdev;
+		INIT_WORK(&pfdev->flr_wrk[vf].work, rvu_gen_pf_flr_handler);
+	}
+
+	return 0;
+}
+
 static int rvu_gen_pf_sriov_enable(struct pci_dev *pdev, int numvfs)
 {
 	struct gen_pf_dev *pfdev = pci_get_drvdata(pdev);
@@ -708,11 +922,25 @@ static int rvu_gen_pf_sriov_enable(struct pci_dev *pdev, int numvfs)
 	if (ret)
 		goto free_mbox;
 
+	ret = rvu_gen_pf_flr_init(pfdev, numvfs);
+	if (ret)
+		goto free_intr;
+
+	ret = rvu_gen_pf_register_flr_me_intr(pfdev, numvfs);
+	if (ret)
+		goto free_flr;
+
 	ret = pci_enable_sriov(pdev, numvfs);
 	if (ret)
-		return ret;
+		goto free_flr_intr;
 
 	return numvfs;
+free_flr_intr:
+	rvu_gen_pf_disable_flr_me_intr(pfdev);
+free_flr:
+	rvu_gen_pf_flr_wq_destroy(pfdev);
+free_intr:
+	rvu_gen_pf_disable_pfvf_mbox_intr(pfdev, numvfs);
 free_mbox:
 	rvu_gen_pf_pfvf_mbox_destroy(pfdev);
 	return ret;
@@ -728,6 +956,8 @@ static int rvu_gen_pf_sriov_disable(struct pci_dev *pdev)
 
 	pci_disable_sriov(pdev);
 
+	rvu_gen_pf_disable_flr_me_intr(pfdev);
+	rvu_gen_pf_flr_wq_destroy(pfdev);
 	rvu_gen_pf_disable_pfvf_mbox_intr(pfdev, numvfs);
 	rvu_gen_pf_pfvf_mbox_destroy(pfdev);
 
diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
index ad651b97b661..7aacb84df07a 100644
--- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
+++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
@@ -16,6 +16,11 @@
 
 struct gen_pf_dev;
 
+struct flr_work {
+	struct work_struct work;
+	struct gen_pf_dev *pfdev;
+};
+
 struct mbox {
 	struct otx2_mbox mbox;
 	struct work_struct mbox_wrk;
@@ -33,6 +38,8 @@ struct gen_pf_dev {
 	struct device *dev;
 	void __iomem *reg_base;
 	char *irq_name;
+	struct workqueue_struct *flr_wq;
+	struct flr_work *flr_wrk;
 	struct work_struct mbox_wrk;
 	struct work_struct mbox_wrk_up;
 
-- 
2.34.1