From: Anshumali Gaur
To: , , , , , ,
CC: Anshumali Gaur
Subject: [PATCH v4 3/4] soc: marvell: rvu-pf: Add mailbox communication btw RVU VFs and PF.
Date: Tue, 22 Oct 2024 11:23:44 +0530
Message-ID: <20241022055345.2983365-4-agaur@marvell.com>
In-Reply-To: <20241022055345.2983365-1-agaur@marvell.com>
References: <20241022055345.2983365-1-agaur@marvell.com>

RVU PF shares a dedicated memory region with each of its VFs. This memory
region is used to establish communication between them. Since the Admin
Function (AF) handles resource management, the PF does not process the
messages sent by VFs; it acts as an intermediary that forwards them to the
AF. Hardware doesn't support direct communication between AF and VFs.
Signed-off-by: Anshumali Gaur
---
 drivers/soc/marvell/rvu_gen_pf/gen_pf.c | 442 ++++++++++++++++++++++++
 drivers/soc/marvell/rvu_gen_pf/gen_pf.h |   2 +
 2 files changed, 444 insertions(+)

diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
index a03fc3f16c69..027d54c182a5 100644
--- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
+++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
@@ -31,6 +31,11 @@ MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Marvell Octeon RVU Generic PF Driver");
 MODULE_DEVICE_TABLE(pci, rvu_gen_pf_id_table);
 
+inline int rvu_get_pf(u16 pcifunc)
+{
+	return (pcifunc >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
+}
+
 static int rvu_gen_pf_check_pf_usable(struct gen_pf_dev *pfdev)
 {
 	u64 rev;
@@ -50,6 +55,120 @@ static int rvu_gen_pf_check_pf_usable(struct gen_pf_dev *pfdev)
 	return 0;
 }
 
+static void rvu_gen_pf_forward_msg_pfvf(struct otx2_mbox_dev *mdev,
+					struct otx2_mbox *pfvf_mbox,
+					void *bbuf_base, int devid)
+{
+	struct otx2_mbox_dev *src_mdev = mdev;
+	int offset;
+
+	/* Msgs are already copied, trigger VF's mbox irq */
+	smp_wmb();
+
+	otx2_mbox_wait_for_zero(pfvf_mbox, devid);
+	offset = pfvf_mbox->trigger | (devid << pfvf_mbox->tr_shift);
+	writeq(MBOX_DOWN_MSG, (void __iomem *)pfvf_mbox->reg_base + offset);
+
+	/* Restore VF's mbox bounce buffer region address */
+	src_mdev->mbase = bbuf_base;
+}
+
+static int rvu_gen_pf_forward_vf_mbox_msgs(struct gen_pf_dev *pfdev,
+					   struct otx2_mbox *src_mbox,
+					   int dir, int vf, int num_msgs)
+{
+	struct otx2_mbox_dev *src_mdev, *dst_mdev;
+	struct mbox_hdr *mbox_hdr;
+	struct mbox_hdr *req_hdr;
+	struct mbox *dst_mbox;
+	int dst_size, err;
+
+	if (dir == MBOX_DIR_PFAF) {
+		/*
+		 * Set VF's mailbox memory as PF's bounce buffer memory, so
+		 * that explicit copying of VF's msgs to PF=>AF mbox region
+		 * and AF=>PF responses to VF's mbox region can be avoided.
+		 */
+		src_mdev = &src_mbox->dev[vf];
+		mbox_hdr = src_mbox->hwbase +
+			   src_mbox->rx_start + (vf * MBOX_SIZE);
+
+		dst_mbox = &pfdev->mbox;
+		dst_size = dst_mbox->mbox.tx_size -
+			   ALIGN(sizeof(*mbox_hdr), MBOX_MSG_ALIGN);
+		/* Check if msgs fit into destination area and has valid size */
+		if (mbox_hdr->msg_size > dst_size || !mbox_hdr->msg_size)
+			return -EINVAL;
+
+		dst_mdev = &dst_mbox->mbox.dev[0];
+
+		mutex_lock(&pfdev->mbox.lock);
+		dst_mdev->mbase = src_mdev->mbase;
+		dst_mdev->msg_size = mbox_hdr->msg_size;
+		dst_mdev->num_msgs = num_msgs;
+		err = rvu_gen_pf_sync_mbox_msg(dst_mbox);
+		/*
+		 * Error code -EIO indicate there is a communication failure
+		 * to the AF. Rest of the error codes indicate that AF processed
+		 * VF messages and set the error codes in response messages
+		 * (if any) so simply forward responses to VF.
+		 */
+		if (err == -EIO) {
+			dev_warn(pfdev->dev,
+				 "AF not responding to VF%d messages\n", vf);
+			/* restore PF mbase and exit */
+			dst_mdev->mbase = pfdev->mbox.bbuf_base;
+			mutex_unlock(&pfdev->mbox.lock);
+			return err;
+		}
+		/*
+		 * At this point, all the VF messages sent to AF are acked
+		 * with proper responses and responses are copied to VF
+		 * mailbox hence raise interrupt to VF.
+		 */
+		req_hdr = (struct mbox_hdr *)(dst_mdev->mbase +
+					      dst_mbox->mbox.rx_start);
+		req_hdr->num_msgs = num_msgs;
+
+		rvu_gen_pf_forward_msg_pfvf(dst_mdev, &pfdev->mbox_pfvf[0].mbox,
+					    pfdev->mbox.bbuf_base, vf);
+		mutex_unlock(&pfdev->mbox.lock);
+	} else if (dir == MBOX_DIR_PFVF_UP) {
+		src_mdev = &src_mbox->dev[0];
+		mbox_hdr = src_mbox->hwbase + src_mbox->rx_start;
+		req_hdr = (struct mbox_hdr *)(src_mdev->mbase +
+					      src_mbox->rx_start);
+		req_hdr->num_msgs = num_msgs;
+
+		dst_mbox = &pfdev->mbox_pfvf[0];
+		dst_size = dst_mbox->mbox_up.tx_size -
+			   ALIGN(sizeof(*mbox_hdr), MBOX_MSG_ALIGN);
+		/* Check if msgs fit into destination area */
+		if (mbox_hdr->msg_size > dst_size)
+			return -EINVAL;
+		dst_mdev = &dst_mbox->mbox_up.dev[vf];
+		dst_mdev->mbase = src_mdev->mbase;
+		dst_mdev->msg_size = mbox_hdr->msg_size;
+		dst_mdev->num_msgs = mbox_hdr->num_msgs;
+		err = rvu_gen_pf_sync_mbox_up_msg(dst_mbox, vf);
+		if (err) {
+			dev_warn(pfdev->dev,
+				 "VF%d is not responding to mailbox\n", vf);
+			return err;
+		}
+	} else if (dir == MBOX_DIR_VFPF_UP) {
+		req_hdr = (struct mbox_hdr *)(src_mbox->dev[0].mbase +
+					      src_mbox->rx_start);
+		req_hdr->num_msgs = num_msgs;
+		rvu_gen_pf_forward_msg_pfvf(&pfdev->mbox_pfvf->mbox_up.dev[vf],
+					    &pfdev->mbox.mbox_up,
+					    pfdev->mbox_pfvf[vf].bbuf_base,
+					    0);
+	}
+
+	return 0;
+}
+
 static irqreturn_t rvu_gen_pf_pfaf_mbox_intr_handler(int irq, void *pf_irq)
 {
 	struct gen_pf_dev *pfdev = (struct gen_pf_dev *)pf_irq;
@@ -192,6 +311,39 @@ static void rvu_gen_pf_process_pfaf_mbox_msg(struct gen_pf_dev *pfdev,
 	}
 }
 
+static void rvu_gen_pf_pfaf_mbox_up_handler(struct work_struct *work)
+{
+	struct mbox *af_mbox = container_of(work, struct mbox, mbox_up_wrk);
+	struct otx2_mbox *mbox = &af_mbox->mbox_up;
+	struct otx2_mbox_dev *mdev = &mbox->dev[0];
+	struct gen_pf_dev *pfdev = af_mbox->pfvf;
+	int offset, id, devid = 0;
+	struct mbox_hdr *rsp_hdr;
+	struct mbox_msghdr *msg;
+	u16 num_msgs;
+
+	rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+	num_msgs = rsp_hdr->num_msgs;
+
+	offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
+
+	for (id = 0; id < num_msgs; id++) {
+		msg = (struct mbox_msghdr *)(mdev->mbase + offset);
+
+		devid = msg->pcifunc & RVU_PFVF_FUNC_MASK;
+		offset = mbox->rx_start + msg->next_msgoff;
+	}
+	/* Forward to VF iff VFs are really present */
+	if (devid && pci_num_vf(pfdev->pdev)) {
+		rvu_gen_pf_forward_vf_mbox_msgs(pfdev, &pfdev->mbox.mbox_up,
+						MBOX_DIR_PFVF_UP, devid - 1,
+						num_msgs);
+		return;
+	}
+
+	otx2_mbox_msg_send(mbox, 0);
+}
+
 static void rvu_gen_pf_pfaf_mbox_handler(struct work_struct *work)
 {
 	struct otx2_mbox_dev *mdev;
@@ -266,6 +418,7 @@ static int rvu_gen_pf_pfaf_mbox_init(struct gen_pf_dev *pfdev)
 		goto exit;
 
 	INIT_WORK(&mbox->mbox_wrk, rvu_gen_pf_pfaf_mbox_handler);
+	INIT_WORK(&mbox->mbox_up_wrk, rvu_gen_pf_pfaf_mbox_up_handler);
 	mutex_init(&mbox->lock);
 
 	return 0;
@@ -274,19 +427,305 @@ static int rvu_gen_pf_pfaf_mbox_init(struct gen_pf_dev *pfdev)
 	return err;
 }
 
+static void rvu_gen_pf_pfvf_mbox_handler(struct work_struct *work)
+{
+	struct mbox_msghdr *msg = NULL;
+	int offset, vf_idx, id, err;
+	struct otx2_mbox_dev *mdev;
+	struct gen_pf_dev *pfdev;
+	struct mbox_hdr *req_hdr;
+	struct otx2_mbox *mbox;
+	struct mbox *vf_mbox;
+
+	vf_mbox = container_of(work, struct mbox, mbox_wrk);
+	pfdev = vf_mbox->pfvf;
+	vf_idx = vf_mbox - pfdev->mbox_pfvf;
+
+	mbox = &pfdev->mbox_pfvf[0].mbox;
+	mdev = &mbox->dev[vf_idx];
+	req_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+
+	offset = ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
+
+	for (id = 0; id < vf_mbox->num_msgs; id++) {
+		msg = (struct mbox_msghdr *)(mdev->mbase + mbox->rx_start +
+					     offset);
+
+		if (msg->sig != OTX2_MBOX_REQ_SIG)
+			goto inval_msg;
+
+		/* Set VF's number in each of the msg */
+		msg->pcifunc &= ~RVU_PFVF_FUNC_MASK;
+		msg->pcifunc |= (vf_idx + 1) & RVU_PFVF_FUNC_MASK;
+		offset = msg->next_msgoff;
+	}
+	err = rvu_gen_pf_forward_vf_mbox_msgs(pfdev, mbox, MBOX_DIR_PFAF, vf_idx,
+					      vf_mbox->num_msgs);
+	if (err)
+		goto inval_msg;
+	return;
+
+inval_msg:
+	if (!msg)
+		return;
+
+	otx2_reply_invalid_msg(mbox, vf_idx, 0, msg->id);
+	otx2_mbox_msg_send(mbox, vf_idx);
+}
+
+static int rvu_gen_pf_pfvf_mbox_init(struct gen_pf_dev *pfdev, int numvfs)
+{
+	void __iomem *hwbase;
+	struct mbox *mbox;
+	int err, vf;
+	u64 base;
+
+	if (!numvfs)
+		return -EINVAL;
+
+	pfdev->mbox_pfvf = devm_kcalloc(&pfdev->pdev->dev, numvfs,
+					sizeof(struct mbox), GFP_KERNEL);
+
+	if (!pfdev->mbox_pfvf)
+		return -ENOMEM;
+
+	pfdev->mbox_pfvf_wq = alloc_workqueue("otx2_pfvf_mailbox",
+					      WQ_UNBOUND | WQ_HIGHPRI |
+					      WQ_MEM_RECLAIM, 0);
+	if (!pfdev->mbox_pfvf_wq)
+		return -ENOMEM;
+
+	/*
+	 * PF <-> VF mailbox region follows after
+	 * PF <-> AF mailbox region.
+	 */
+	base = pci_resource_start(pfdev->pdev, PCI_MBOX_BAR_NUM) + MBOX_SIZE;
+
+	hwbase = ioremap_wc(base, MBOX_SIZE * pfdev->total_vfs);
+	if (!hwbase) {
+		err = -ENOMEM;
+		goto free_wq;
+	}
+
+	mbox = &pfdev->mbox_pfvf[0];
+	err = otx2_mbox_init(&mbox->mbox, hwbase, pfdev->pdev, pfdev->reg_base,
+			     MBOX_DIR_PFVF, numvfs);
+	if (err)
+		goto free_iomem;
+
+	err = otx2_mbox_init(&mbox->mbox_up, hwbase, pfdev->pdev, pfdev->reg_base,
+			     MBOX_DIR_PFVF_UP, numvfs);
+	if (err)
+		goto free_iomem;
+
+	for (vf = 0; vf < numvfs; vf++) {
+		mbox->pfvf = pfdev;
+		INIT_WORK(&mbox->mbox_wrk, rvu_gen_pf_pfvf_mbox_handler);
+		mbox++;
+	}
+
+	return 0;
+
+free_iomem:
+	if (hwbase)
+		iounmap(hwbase);
+free_wq:
+	destroy_workqueue(pfdev->mbox_pfvf_wq);
+	return err;
+}
+
+static void rvu_gen_pf_pfvf_mbox_destroy(struct gen_pf_dev *pfdev)
+{
+	struct mbox *mbox = &pfdev->mbox_pfvf[0];
+
+	if (!mbox)
+		return;
+
+	if (pfdev->mbox_pfvf_wq) {
+		destroy_workqueue(pfdev->mbox_pfvf_wq);
+		pfdev->mbox_pfvf_wq = NULL;
+	}
+
+	if (mbox->mbox.hwbase)
+		iounmap((void __iomem *)mbox->mbox.hwbase);
+
+	otx2_mbox_destroy(&mbox->mbox);
+}
+
+static void rvu_gen_pf_enable_pfvf_mbox_intr(struct gen_pf_dev *pfdev, int numvfs)
+{
+	/* Clear PF <=> VF mailbox IRQ */
+	writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(0));
+	writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(1));
+
+	/* Enable PF <=> VF mailbox IRQ */
+	writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFPF_MBOX_INT_ENA_W1SX(0));
+	if (numvfs > 64) {
+		numvfs -= 64;
+		writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFPF_MBOX_INT_ENA_W1SX(1));
+	}
+}
+
+static void rvu_gen_pf_disable_pfvf_mbox_intr(struct gen_pf_dev *pfdev, int numvfs)
+{
+	int vector;
+
+	/* Disable PF <=> VF mailbox IRQ */
+	writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INT_ENA_W1CX(0));
+	writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INT_ENA_W1CX(1));
+
+	writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(0));
+	vector = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFPF_MBOX0);
+	free_irq(vector, pfdev);
+
+	if (numvfs > 64) {
+		writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(1));
+		vector = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFPF_MBOX1);
+		free_irq(vector, pfdev);
+	}
+}
+
+static void rvu_gen_pf_queue_vf_work(struct mbox *mw, struct workqueue_struct *mbox_wq,
+				     int first, int mdevs, u64 intr)
+{
+	struct otx2_mbox_dev *mdev;
+	struct otx2_mbox *mbox;
+	struct mbox_hdr *hdr;
+	int i;
+
+	for (i = first; i < mdevs; i++) {
+		/* start from 0 */
+		if (!(intr & BIT_ULL(i - first)))
+			continue;
+
+		mbox = &mw->mbox;
+		mdev = &mbox->dev[i];
+		hdr = mdev->mbase + mbox->rx_start;
+		/*
+		 * The hdr->num_msgs is set to zero immediately in the interrupt
+		 * handler to ensure that it holds a correct value next time
+		 * when the interrupt handler is called. pf->mw[i].num_msgs
+		 * holds the data for use in otx2_pfvf_mbox_handler and
+		 * pf->mw[i].up_num_msgs holds the data for use in
+		 * otx2_pfvf_mbox_up_handler.
+		 */
+		if (hdr->num_msgs) {
+			mw[i].num_msgs = hdr->num_msgs;
+			hdr->num_msgs = 0;
+			queue_work(mbox_wq, &mw[i].mbox_wrk);
+		}
+
+		mbox = &mw->mbox_up;
+		mdev = &mbox->dev[i];
+		hdr = mdev->mbase + mbox->rx_start;
+		if (hdr->num_msgs) {
+			mw[i].up_num_msgs = hdr->num_msgs;
+			hdr->num_msgs = 0;
+			queue_work(mbox_wq, &mw[i].mbox_up_wrk);
+		}
+	}
+}
+
+static irqreturn_t rvu_gen_pf_pfvf_mbox_intr_handler(int irq, void *pf_irq)
+{
+	struct gen_pf_dev *pfdev = (struct gen_pf_dev *)(pf_irq);
+	int vfs = pfdev->total_vfs;
+	struct mbox *mbox;
+	u64 intr;
+
+	mbox = pfdev->mbox_pfvf;
+	/* Handle VF interrupts */
+	if (vfs > 64) {
+		intr = readq(pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(1));
+		writeq(intr, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(1));
+		rvu_gen_pf_queue_vf_work(mbox, pfdev->mbox_pfvf_wq, 64, vfs, intr);
+		if (intr)
+			trace_otx2_msg_interrupt(mbox->mbox.pdev, "VF(s) to PF", intr);
+		vfs = 64;
+	}
+
+	intr = readq(pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(0));
+	writeq(intr, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(0));
+
+	rvu_gen_pf_queue_vf_work(mbox, pfdev->mbox_pfvf_wq, 0, vfs, intr);
+
+	if (intr)
+		trace_otx2_msg_interrupt(mbox->mbox.pdev, "VF(s) to PF", intr);
+
+	return IRQ_HANDLED;
+}
+
+static int rvu_gen_pf_register_pfvf_mbox_intr(struct gen_pf_dev *pfdev, int numvfs)
+{
+	char *irq_name;
+	int err;
+
+	/* Register MBOX0 interrupt handler */
+	irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_VFPF_MBOX0 * NAME_SIZE];
+	if (pfdev->pcifunc)
+		snprintf(irq_name, NAME_SIZE,
+			 "Generic RVUPF%d_VF Mbox0", rvu_get_pf(pfdev->pcifunc));
+	else
+		snprintf(irq_name, NAME_SIZE, "Generic RVUPF_VF Mbox0");
+	err = request_irq(pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFPF_MBOX0),
+			  rvu_gen_pf_pfvf_mbox_intr_handler, 0, irq_name, pfdev);
+	if (err) {
+		dev_err(pfdev->dev,
+			"RVUPF: IRQ registration failed for PFVF mbox0 irq\n");
+		return err;
+	}
+
+	if (numvfs > 64) {
+		/* Register MBOX1 interrupt handler */
+		irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_VFPF_MBOX1 * NAME_SIZE];
+		if (pfdev->pcifunc)
+			snprintf(irq_name, NAME_SIZE,
+				 "Generic RVUPF%d_VF Mbox1", rvu_get_pf(pfdev->pcifunc));
+		else
+			snprintf(irq_name, NAME_SIZE, "Generic RVUPF_VF Mbox1");
+		err = request_irq(pci_irq_vector(pfdev->pdev,
+						 RVU_PF_INT_VEC_VFPF_MBOX1),
+				  rvu_gen_pf_pfvf_mbox_intr_handler,
+				  0, irq_name, pfdev);
+		if (err) {
+			dev_err(pfdev->dev,
+				"RVUPF: IRQ registration failed for PFVF mbox1 irq\n");
+			return err;
+		}
+	}
+
+	rvu_gen_pf_enable_pfvf_mbox_intr(pfdev, numvfs);
+
+	return 0;
+}
+
 static int rvu_gen_pf_sriov_enable(struct pci_dev *pdev, int numvfs)
 {
+	struct gen_pf_dev *pfdev = pci_get_drvdata(pdev);
 	int ret;
 
+	/* Init PF <=> VF mailbox stuff */
+	ret = rvu_gen_pf_pfvf_mbox_init(pfdev, numvfs);
+	if (ret)
+		return ret;
+
+	ret = rvu_gen_pf_register_pfvf_mbox_intr(pfdev, numvfs);
+	if (ret)
+		goto free_mbox;
+
 	ret = pci_enable_sriov(pdev, numvfs);
 	if (ret)
 		return ret;
 
 	return numvfs;
+free_mbox:
+	rvu_gen_pf_pfvf_mbox_destroy(pfdev);
+	return ret;
 }
 
 static int rvu_gen_pf_sriov_disable(struct pci_dev *pdev)
 {
+	struct gen_pf_dev *pfdev = pci_get_drvdata(pdev);
 	int numvfs = pci_num_vf(pdev);
 
 	if (!numvfs)
@@ -294,6 +733,9 @@ static int rvu_gen_pf_sriov_disable(struct pci_dev *pdev)
 
 	pci_disable_sriov(pdev);
 
+	rvu_gen_pf_disable_pfvf_mbox_intr(pfdev, numvfs);
+	rvu_gen_pf_pfvf_mbox_destroy(pfdev);
+
 	return 0;
 }
 
diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
index 2019bea10ad0..ad651b97b661 100644
--- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
+++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
@@ -38,7 +38,9 @@ struct gen_pf_dev {
 
 	/* Mbox */
 	struct mbox mbox;
+	struct mbox *mbox_pfvf;
 	struct workqueue_struct *mbox_wq;
+	struct workqueue_struct *mbox_pfvf_wq;
 
 	int pf;
 	u16 pcifunc;	/* RVU PF_FUNC */
-- 
2.25.1