From: Anshumali Gaur
Subject: [PATCH v4 4/4] soc: marvell: rvu-pf: Handle function level reset (FLR) IRQs for VFs
Date: Tue, 22 Oct 2024 11:23:45 +0530
Message-ID: <20241022055345.2983365-5-agaur@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20241022055345.2983365-1-agaur@marvell.com>
References: <20241022055345.2983365-1-agaur@marvell.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Add a PCIe FLR interrupt handler for VFs. When an FLR is triggered for a
VF, the parent PF gets an interrupt. The PF creates a mailbox message and
sends it to the RVU Admin Function (AF). The AF cleans up all resources
attached to that specific VF and acks the PF that the FLR has been
handled.

Signed-off-by: Anshumali Gaur
---
 drivers/soc/marvell/rvu_gen_pf/gen_pf.c | 232 +++++++++++++++++++++++-
 drivers/soc/marvell/rvu_gen_pf/gen_pf.h |   7 +
 2 files changed, 238 insertions(+), 1 deletion(-)

diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
index 027d54c182a5..d99f0064aaf8 100644
--- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
+++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
@@ -626,6 +626,15 @@ static void rvu_gen_pf_queue_vf_work(struct mbox *mw, struct workqueue_struct *m
 	}
 }
 
+static void rvu_gen_pf_flr_wq_destroy(struct gen_pf_dev *pfdev)
+{
+	if (!pfdev->flr_wq)
+		return;
+	destroy_workqueue(pfdev->flr_wq);
+	pfdev->flr_wq = NULL;
+	devm_kfree(pfdev->dev, pfdev->flr_wrk);
+}
+
 static irqreturn_t rvu_gen_pf_pfvf_mbox_intr_handler(int irq, void *pf_irq)
 {
 	struct gen_pf_dev *pfdev = (struct gen_pf_dev *)(pf_irq);
@@ -699,6 +708,211 @@ static int rvu_gen_pf_register_pfvf_mbox_intr(struct gen_pf_dev *pfdev, int numv
 	return 0;
 }
 
+static void rvu_gen_pf_flr_handler(struct work_struct *work)
+{
+	struct flr_work *flrwork = container_of(work, struct flr_work, work);
+	struct gen_pf_dev *pfdev = flrwork->pfdev;
+	struct mbox *mbox = &pfdev->mbox;
+	struct msg_req *req;
+	int vf, reg = 0;
+
+	vf = flrwork - pfdev->flr_wrk;
+
+	mutex_lock(&mbox->lock);
+	req = gen_pf_mbox_alloc_msg_vf_flr(mbox);
+	if (!req) {
+		mutex_unlock(&mbox->lock);
+		return;
+	}
+	req->hdr.pcifunc &= ~RVU_PFVF_FUNC_MASK;
+	req->hdr.pcifunc |= (vf + 1) & RVU_PFVF_FUNC_MASK;
+
+	if (!rvu_gen_pf_sync_mbox_msg(&pfdev->mbox)) {
+		if (vf >= 64) {
+			reg = 1;
+			vf = vf - 64;
+		}
+		/* clear transaction pending bit */
+		writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFTRPENDX(reg));
+		writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1SX(reg));
+	}
+
+	mutex_unlock(&mbox->lock);
+}
+
+static irqreturn_t rvu_gen_pf_me_intr_handler(int irq, void *pf_irq)
+{
+	struct gen_pf_dev *pfdev = (struct gen_pf_dev *)pf_irq;
+	int vf, reg, num_reg = 1;
+	u64 intr;
+
+	if (pfdev->total_vfs > 64)
+		num_reg = 2;
+
+	for (reg = 0; reg < num_reg; reg++) {
+		intr = readq(pfdev->reg_base + RVU_PF_VFME_INTX(reg));
+		if (!intr)
+			continue;
+		for (vf = 0; vf < 64; vf++) {
+			if (!(intr & BIT_ULL(vf)))
+				continue;
+			/* clear trpend bit */
+			writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFTRPENDX(reg));
+			/* clear interrupt */
+			writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFME_INTX(reg));
+		}
+	}
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t rvu_gen_pf_flr_intr_handler(int irq, void *pf_irq)
+{
+	struct gen_pf_dev *pfdev = (struct gen_pf_dev *)pf_irq;
+	int reg, dev, vf, start_vf, num_reg = 1;
+	u64 intr;
+
+	if (pfdev->total_vfs > 64)
+		num_reg = 2;
+
+	for (reg = 0; reg < num_reg; reg++) {
+		intr = readq(pfdev->reg_base + RVU_PF_VFFLR_INTX(reg));
+		if (!intr)
+			continue;
+		start_vf = 64 * reg;
+		for (vf = 0; vf < 64; vf++) {
+			if (!(intr & BIT_ULL(vf)))
+				continue;
+			dev = vf + start_vf;
+			queue_work(pfdev->flr_wq, &pfdev->flr_wrk[dev].work);
+			/* Clear interrupt */
+			writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFFLR_INTX(reg));
+			/* Disable the interrupt */
+			writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1CX(reg));
+		}
+	}
+	return IRQ_HANDLED;
+}
+
+static int rvu_gen_pf_register_flr_me_intr(struct gen_pf_dev *pfdev, int numvfs)
+{
+	char *irq_name;
+	int ret;
+
+	/* Register ME interrupt handler */
+	irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_VFME0 * NAME_SIZE];
+	snprintf(irq_name, NAME_SIZE, "Generic RVUPF%d_ME0", rvu_get_pf(pfdev->pcifunc));
+	ret = request_irq(pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFME0),
+			  rvu_gen_pf_me_intr_handler, 0, irq_name, pfdev);
+
+	if (ret) {
+		dev_err(pfdev->dev,
+			"Generic RVUPF: IRQ registration failed for ME0\n");
+	}
+
+	/* Register FLR interrupt handler */
+	irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_VFFLR0 * NAME_SIZE];
+	snprintf(irq_name, NAME_SIZE, "Generic RVUPF%d_FLR0", rvu_get_pf(pfdev->pcifunc));
+	ret = request_irq(pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFFLR0),
+			  rvu_gen_pf_flr_intr_handler, 0, irq_name, pfdev);
+	if (ret) {
+		dev_err(pfdev->dev,
+			"Generic RVUPF: IRQ registration failed for FLR0\n");
+		return ret;
+	}
+
+	if (numvfs > 64) {
+		irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_VFME1 * NAME_SIZE];
+		snprintf(irq_name, NAME_SIZE, "Generic RVUPF%d_ME1",
+			 rvu_get_pf(pfdev->pcifunc));
+		ret = request_irq(pci_irq_vector
+				  (pfdev->pdev, RVU_PF_INT_VEC_VFME1),
+				  rvu_gen_pf_me_intr_handler, 0, irq_name, pfdev);
+		if (ret) {
+			dev_err(pfdev->dev,
+				"Generic RVUPF: IRQ registration failed for ME1\n");
+		}
+		irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_VFFLR1 * NAME_SIZE];
+		snprintf(irq_name, NAME_SIZE, "Generic RVUPF%d_FLR1",
+			 rvu_get_pf(pfdev->pcifunc));
+		ret = request_irq(pci_irq_vector
+				  (pfdev->pdev, RVU_PF_INT_VEC_VFFLR1),
+				  rvu_gen_pf_flr_intr_handler, 0, irq_name, pfdev);
+		if (ret) {
+			dev_err(pfdev->dev,
+				"Generic RVUPF: IRQ registration failed for FLR1\n");
+			return ret;
+		}
+	}
+
+	/* Enable ME interrupt for all VFs */
+	writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFME_INTX(0));
+	writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFME_INT_ENA_W1SX(0));
+
+	/* Enable FLR interrupt for all VFs */
+	writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFFLR_INTX(0));
+	writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1SX(0));
+
+	if (numvfs > 64) {
+		numvfs -= 64;
+
+		writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFME_INTX(1));
+		writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFME_INT_ENA_W1SX(1));
+
+		writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFFLR_INTX(1));
+		writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1SX(1));
+	}
+	return 0;
+}
+
+static void rvu_gen_pf_disable_flr_me_intr(struct gen_pf_dev *pfdev)
+{
+	int irq, vfs = pfdev->total_vfs;
+
+	/* Disable VFs ME interrupts */
+	writeq(INTR_MASK(vfs), pfdev->reg_base + RVU_PF_VFME_INT_ENA_W1CX(0));
+	irq = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFME0);
+	free_irq(irq, pfdev);
+
+	/* Disable VFs FLR interrupts */
+	writeq(INTR_MASK(vfs), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1CX(0));
+	irq = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFFLR0);
+	free_irq(irq, pfdev);
+
+	if (vfs <= 64)
+		return;
+
+	writeq(INTR_MASK(vfs - 64), pfdev->reg_base + RVU_PF_VFME_INT_ENA_W1CX(1));
+	irq = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFME1);
+	free_irq(irq, pfdev);
+
+	writeq(INTR_MASK(vfs - 64), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1CX(1));
+	irq = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFFLR1);
+	free_irq(irq, pfdev);
+}
+
+static int rvu_gen_pf_flr_init(struct gen_pf_dev *pfdev, int num_vfs)
+{
+	int vf;
+
+	pfdev->flr_wq = alloc_ordered_workqueue("otx2_pf_flr_wq", WQ_HIGHPRI);
+	if (!pfdev->flr_wq)
+		return -ENOMEM;
+
+	pfdev->flr_wrk = devm_kcalloc(pfdev->dev, num_vfs,
+				      sizeof(struct flr_work), GFP_KERNEL);
+	if (!pfdev->flr_wrk) {
+		destroy_workqueue(pfdev->flr_wq);
+		return -ENOMEM;
+	}
+
+	for (vf = 0; vf < num_vfs; vf++) {
+		pfdev->flr_wrk[vf].pfdev = pfdev;
+		INIT_WORK(&pfdev->flr_wrk[vf].work, rvu_gen_pf_flr_handler);
+	}
+
+	return 0;
+}
+
 static int rvu_gen_pf_sriov_enable(struct pci_dev *pdev, int numvfs)
 {
 	struct gen_pf_dev *pfdev = pci_get_drvdata(pdev);
@@ -713,11 +927,25 @@ static int rvu_gen_pf_sriov_enable(struct pci_dev *pdev, int numvfs)
 	if (ret)
 		goto free_mbox;
 
+	ret = rvu_gen_pf_flr_init(pfdev, numvfs);
+	if (ret)
+		goto free_intr;
+
+	ret = rvu_gen_pf_register_flr_me_intr(pfdev, numvfs);
+	if (ret)
+		goto free_flr;
+
 	ret = pci_enable_sriov(pdev, numvfs);
 	if (ret)
-		return ret;
+		goto free_flr_intr;
 
 	return numvfs;
+free_flr_intr:
+	rvu_gen_pf_disable_flr_me_intr(pfdev);
+free_flr:
+	rvu_gen_pf_flr_wq_destroy(pfdev);
+free_intr:
+	rvu_gen_pf_disable_pfvf_mbox_intr(pfdev, numvfs);
 free_mbox:
 	rvu_gen_pf_pfvf_mbox_destroy(pfdev);
 	return ret;
@@ -733,6 +961,8 @@ static int rvu_gen_pf_sriov_disable(struct pci_dev *pdev)
 
 	pci_disable_sriov(pdev);
 
+	rvu_gen_pf_disable_flr_me_intr(pfdev);
+	rvu_gen_pf_flr_wq_destroy(pfdev);
 	rvu_gen_pf_disable_pfvf_mbox_intr(pfdev, numvfs);
 	rvu_gen_pf_pfvf_mbox_destroy(pfdev);
 
diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
index ad651b97b661..7aacb84df07a 100644
--- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
+++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
@@ -16,6 +16,11 @@
 
 struct gen_pf_dev;
 
+struct flr_work {
+	struct work_struct work;
+	struct gen_pf_dev *pfdev;
+};
+
 struct mbox {
 	struct otx2_mbox mbox;
 	struct work_struct mbox_wrk;
@@ -33,6 +38,8 @@ struct gen_pf_dev {
 	struct device *dev;
 	void __iomem *reg_base;
 	char *irq_name;
+	struct workqueue_struct *flr_wq;
+	struct flr_work *flr_wrk;
 	struct work_struct mbox_wrk;
 	struct work_struct mbox_wrk_up;
 
-- 
2.25.1