From: Fan Gong
To: Fan Gong, Zhu Yikai
CC: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Simon Horman, Andrew Lunn, Jonathan Corbet, Bjorn Helgaas, luosifu,
	Xin Guo, Shen Chenyang, Zhou Shuai, Wu Like, Shi Jing, Fu Guiming,
	Meny Yossefi, Gur Stavi, Lee Trager, Michael Ellerman,
	Vadim Fedorenko, Suman Ghosh, Przemek Kitszel, Joe Damato,
	Christophe JAILLET
Subject: [PATCH net-next v10 7/8] hinic3: Mailbox management interfaces
Date: Tue, 22 Jul 2025 15:18:46 +0800
Message-ID: <463548c7cd0a6044f1dffa2b6fdef2f36c294356.1753152592.git.zhuyikai1@h-partners.com>

Add initialization of the mailbox management interfaces. This enables the
mailbox to communicate with the HW through its event queues.
Co-developed-by: Xin Guo
Signed-off-by: Xin Guo
Co-developed-by: Zhu Yikai
Signed-off-by: Zhu Yikai
Signed-off-by: Fan Gong
---
 .../net/ethernet/huawei/hinic3/hinic3_mbox.c  | 435 +++++++++++++++++-
 .../net/ethernet/huawei/hinic3/hinic3_mbox.h  |  22 +
 .../huawei/hinic3/hinic3_queue_common.h       |   1 +
 3 files changed, 456 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_mbox.c b/drivers/net/ethernet/huawei/hinic3/hinic3_mbox.c
index 34c45962f83d..2871be08fdd0 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_mbox.c
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_mbox.c
@@ -9,6 +9,23 @@
 #include "hinic3_hwif.h"
 #include "hinic3_mbox.h"
 
+#define MBOX_INT_DST_AEQN_MASK        GENMASK(11, 10)
+#define MBOX_INT_SRC_RESP_AEQN_MASK   GENMASK(13, 12)
+#define MBOX_INT_STAT_DMA_MASK        GENMASK(19, 14)
+/* TX size, expressed in 4 bytes units */
+#define MBOX_INT_TX_SIZE_MASK         GENMASK(24, 20)
+/* SO_RO == strong order, relaxed order */
+#define MBOX_INT_STAT_DMA_SO_RO_MASK  GENMASK(26, 25)
+#define MBOX_INT_WB_EN_MASK           BIT(28)
+#define MBOX_INT_SET(val, field) \
+	FIELD_PREP(MBOX_INT_##field##_MASK, val)
+
+#define MBOX_CTRL_TRIGGER_AEQE_MASK   BIT(0)
+#define MBOX_CTRL_TX_STATUS_MASK      BIT(1)
+#define MBOX_CTRL_DST_FUNC_MASK       GENMASK(28, 16)
+#define MBOX_CTRL_SET(val, field) \
+	FIELD_PREP(MBOX_CTRL_##field##_MASK, val)
+
 #define MBOX_MSG_POLLING_TIMEOUT_MS   8000   // send msg seg timeout
 #define MBOX_COMP_POLLING_TIMEOUT_MS  40000  // response
 
@@ -25,6 +42,20 @@
 #define MBOX_LAST_SEG_MAX_LEN \
 	(MBOX_MAX_BUF_SZ - MBOX_SEQ_ID_MAX_VAL * MBOX_SEG_LEN)
 
+/* mbox write back status is 16B, only first 4B is used */
+#define MBOX_WB_STATUS_ERRCODE_MASK      0xFFFF
+#define MBOX_WB_STATUS_MASK              0xFF
+#define MBOX_WB_ERROR_CODE_MASK          0xFF00
+#define MBOX_WB_STATUS_FINISHED_SUCCESS  0xFF
+#define MBOX_WB_STATUS_NOT_FINISHED      0x00
+
+#define MBOX_STATUS_FINISHED(wb) \
+	(((wb) & MBOX_WB_STATUS_MASK) != MBOX_WB_STATUS_NOT_FINISHED)
+#define MBOX_STATUS_SUCCESS(wb) \
+	(((wb) & MBOX_WB_STATUS_MASK) == MBOX_WB_STATUS_FINISHED_SUCCESS)
+#define MBOX_STATUS_ERRCODE(wb) \
+	((wb) & MBOX_WB_ERROR_CODE_MASK)
+
 #define MBOX_DMA_MSG_QUEUE_DEPTH    32
 #define MBOX_BODY_FROM_HDR(header)  ((u8 *)(header) + MBOX_HEADER_SZ)
 #define MBOX_AREA(hwif) \
@@ -410,9 +441,409 @@ void hinic3_free_mbox(struct hinic3_hwdev *hwdev)
 	kfree(mbox);
 }
 
+#define MBOX_DMA_MSG_INIT_XOR_VAL  0x5a5a5a5a
+#define MBOX_XOR_DATA_ALIGN        4
+static u32 mbox_dma_msg_xor(u32 *data, u32 msg_len)
+{
+	u32 xor = MBOX_DMA_MSG_INIT_XOR_VAL;
+	u32 dw_len = msg_len / sizeof(u32);
+	u32 i;
+
+	for (i = 0; i < dw_len; i++)
+		xor ^= data[i];
+
+	return xor;
+}
+
+#define MBOX_MQ_ID_MASK(mq, idx)  ((idx) & ((mq)->depth - 1))
+
+static bool is_msg_queue_full(struct mbox_dma_queue *mq)
+{
+	return (MBOX_MQ_ID_MASK(mq, (mq)->prod_idx + 1) ==
+		MBOX_MQ_ID_MASK(mq, (mq)->cons_idx));
+}
+
+static int mbox_prepare_dma_entry(struct hinic3_mbox *mbox,
+				  struct mbox_dma_queue *mq,
+				  struct mbox_dma_msg *dma_msg,
+				  const void *msg, u32 msg_len)
+{
+	u64 dma_addr, offset;
+	void *dma_vaddr;
+
+	if (is_msg_queue_full(mq)) {
+		dev_err(mbox->hwdev->dev, "Mbox sync message queue is busy, pi: %u, ci: %u\n",
+			mq->prod_idx, MBOX_MQ_ID_MASK(mq, mq->cons_idx));
+		return -EBUSY;
+	}
+
+	/* copy data to DMA buffer */
+	offset = mq->prod_idx * MBOX_MAX_BUF_SZ;
+	dma_vaddr = (u8 *)mq->dma_buf_vaddr + offset;
+	memcpy(dma_vaddr, msg, msg_len);
+	dma_addr = mq->dma_buf_paddr + offset;
+	dma_msg->dma_addr_high = upper_32_bits(dma_addr);
+	dma_msg->dma_addr_low = lower_32_bits(dma_addr);
+	dma_msg->msg_len = msg_len;
+	/* The firmware obtains message based on 4B alignment. */
+	dma_msg->xor = mbox_dma_msg_xor(dma_vaddr,
+					ALIGN(msg_len, MBOX_XOR_DATA_ALIGN));
+	mq->prod_idx++;
+	mq->prod_idx = MBOX_MQ_ID_MASK(mq, mq->prod_idx);
+
+	return 0;
+}
+
+static int mbox_prepare_dma_msg(struct hinic3_mbox *mbox,
+				enum mbox_msg_ack_type ack_type,
+				struct mbox_dma_msg *dma_msg, const void *msg,
+				u32 msg_len)
+{
+	struct mbox_dma_queue *mq;
+	u32 val;
+
+	val = hinic3_hwif_read_reg(mbox->hwdev->hwif, MBOX_MQ_CI_OFFSET);
+	if (ack_type == MBOX_MSG_ACK) {
+		mq = &mbox->sync_msg_queue;
+		mq->cons_idx = MBOX_MQ_CI_GET(val, SYNC);
+	} else {
+		mq = &mbox->async_msg_queue;
+		mq->cons_idx = MBOX_MQ_CI_GET(val, ASYNC);
+	}
+
+	return mbox_prepare_dma_entry(mbox, mq, dma_msg, msg, msg_len);
+}
+
+static void clear_mbox_status(struct hinic3_send_mbox *mbox)
+{
+	__be64 *wb_status = mbox->wb_vaddr;
+
+	*wb_status = 0;
+	/* clear mailbox write back status */
+	wmb();
+}
+
+static void mbox_dword_write(const void *src, void __iomem *dst, u32 count)
+{
+	u32 __iomem *dst32 = dst;
+	const u32 *src32 = src;
+	u32 i;
+
+	/* Data written to mbox is arranged in structs with little endian fields
+	 * but when written to HW every dword (32bits) should be swapped since
+	 * the HW will swap it again.
+	 */
+	for (i = 0; i < count; i++)
+		__raw_writel(swab32(src32[i]), dst32 + i);
+}
+
+static void mbox_copy_header(struct hinic3_hwdev *hwdev,
+			     struct hinic3_send_mbox *mbox, u64 *header)
+{
+	mbox_dword_write(header, mbox->data, MBOX_HEADER_SZ / sizeof(u32));
+}
+
+static void mbox_copy_send_data(struct hinic3_hwdev *hwdev,
+				struct hinic3_send_mbox *mbox, void *seg,
+				u32 seg_len)
+{
+	u32 __iomem *dst = (u32 __iomem *)(mbox->data + MBOX_HEADER_SZ);
+	u32 count, leftover, last_dword;
+	const u32 *src = seg;
+
+	count = seg_len / sizeof(u32);
+	leftover = seg_len % sizeof(u32);
+	if (count > 0)
+		mbox_dword_write(src, dst, count);
+
+	if (leftover > 0) {
+		last_dword = 0;
+		memcpy(&last_dword, src + count, leftover);
+		mbox_dword_write(&last_dword, dst + count, 1);
+	}
+}
+
+static void write_mbox_msg_attr(struct hinic3_mbox *mbox,
+				u16 dst_func, u16 dst_aeqn, u32 seg_len)
+{
+	struct hinic3_hwif *hwif = mbox->hwdev->hwif;
+	u32 mbox_int, mbox_ctrl, tx_size;
+
+	tx_size = ALIGN(seg_len + MBOX_HEADER_SZ, MBOX_SEG_LEN_ALIGN) >> 2;
+
+	mbox_int = MBOX_INT_SET(dst_aeqn, DST_AEQN) |
+		   MBOX_INT_SET(0, STAT_DMA) |
+		   MBOX_INT_SET(tx_size, TX_SIZE) |
+		   MBOX_INT_SET(0, STAT_DMA_SO_RO) |
+		   MBOX_INT_SET(1, WB_EN);
+
+	mbox_ctrl = MBOX_CTRL_SET(1, TX_STATUS) |
+		    MBOX_CTRL_SET(0, TRIGGER_AEQE) |
+		    MBOX_CTRL_SET(dst_func, DST_FUNC);
+
+	hinic3_hwif_write_reg(hwif, HINIC3_FUNC_CSR_MAILBOX_INT_OFF, mbox_int);
+	hinic3_hwif_write_reg(hwif, HINIC3_FUNC_CSR_MAILBOX_CONTROL_OFF,
+			      mbox_ctrl);
+}
+
+static u16 get_mbox_status(const struct hinic3_send_mbox *mbox)
+{
+	__be64 *wb_status = mbox->wb_vaddr;
+	u64 wb_val;
+
+	wb_val = be64_to_cpu(*wb_status);
+	/* verify reading before check */
+	rmb();
+
+	return wb_val & MBOX_WB_STATUS_ERRCODE_MASK;
+}
+
+static enum hinic3_wait_return check_mbox_wb_status(void *priv_data)
+{
+	struct hinic3_mbox *mbox = priv_data;
+	u16 wb_status;
+
+	wb_status = get_mbox_status(&mbox->send_mbox);
+
+	return MBOX_STATUS_FINISHED(wb_status) ?
+	       HINIC3_WAIT_PROCESS_CPL : HINIC3_WAIT_PROCESS_WAITING;
+}
+
+static int send_mbox_seg(struct hinic3_mbox *mbox, u64 header,
+			 u16 dst_func, void *seg, u32 seg_len, void *msg_info)
+{
+	struct hinic3_send_mbox *send_mbox = &mbox->send_mbox;
+	struct hinic3_hwdev *hwdev = mbox->hwdev;
+	u8 num_aeqs = hwdev->hwif->attr.num_aeqs;
+	enum mbox_msg_direction_type dir;
+	u16 dst_aeqn, wb_status, errcode;
+	int err;
+
+	/* mbox to mgmt cpu, hardware doesn't care about dst aeq id */
+	if (num_aeqs > MBOX_MSG_AEQ_FOR_MBOX) {
+		dir = MBOX_MSG_HEADER_GET(header, DIRECTION);
+		dst_aeqn = (dir == MBOX_MSG_SEND) ?
+			   MBOX_MSG_AEQ_FOR_EVENT : MBOX_MSG_AEQ_FOR_MBOX;
+	} else {
+		dst_aeqn = 0;
+	}
+
+	clear_mbox_status(send_mbox);
+	mbox_copy_header(hwdev, send_mbox, &header);
+	mbox_copy_send_data(hwdev, send_mbox, seg, seg_len);
+	write_mbox_msg_attr(mbox, dst_func, dst_aeqn, seg_len);
+
+	err = hinic3_wait_for_timeout(mbox, check_mbox_wb_status,
+				      MBOX_MSG_POLLING_TIMEOUT_MS,
+				      USEC_PER_MSEC);
+	wb_status = get_mbox_status(send_mbox);
+	if (err) {
+		dev_err(hwdev->dev, "Send mailbox segment timeout, wb status: 0x%x\n",
+			wb_status);
+		return err;
+	}
+
+	if (!MBOX_STATUS_SUCCESS(wb_status)) {
+		dev_err(hwdev->dev,
+			"Send mailbox segment to function %u error, wb status: 0x%x\n",
+			dst_func, wb_status);
+		errcode = MBOX_STATUS_ERRCODE(wb_status);
+		return errcode ? errcode : -EFAULT;
+	}
+
+	return 0;
+}
+
+static int send_mbox_msg(struct hinic3_mbox *mbox, u8 mod, u16 cmd,
+			 const void *msg, u32 msg_len, u16 dst_func,
+			 enum mbox_msg_direction_type direction,
+			 enum mbox_msg_ack_type ack_type,
+			 struct mbox_msg_info *msg_info)
+{
+	enum mbox_msg_data_type data_type = MBOX_MSG_DATA_INLINE;
+	struct hinic3_hwdev *hwdev = mbox->hwdev;
+	struct mbox_dma_msg dma_msg;
+	u32 seg_len = MBOX_SEG_LEN;
+	u64 header = 0;
+	u32 seq_id = 0;
+	u16 rsp_aeq_id;
+	u8 *msg_seg;
+	int err = 0;
+	u32 left;
+
+	if (hwdev->hwif->attr.num_aeqs > MBOX_MSG_AEQ_FOR_MBOX)
+		rsp_aeq_id = MBOX_MSG_AEQ_FOR_MBOX;
+	else
+		rsp_aeq_id = 0;
+
+	if (dst_func == MBOX_MGMT_FUNC_ID &&
+	    !(hwdev->features[0] & MBOX_COMM_F_MBOX_SEGMENT)) {
+		err = mbox_prepare_dma_msg(mbox, ack_type, &dma_msg,
+					   msg, msg_len);
+		if (err)
+			goto err_send;
+
+		msg = &dma_msg;
+		msg_len = sizeof(dma_msg);
+		data_type = MBOX_MSG_DATA_DMA;
+	}
+
+	msg_seg = (u8 *)msg;
+	left = msg_len;
+
+	header = MBOX_MSG_HEADER_SET(msg_len, MSG_LEN) |
+		 MBOX_MSG_HEADER_SET(mod, MODULE) |
+		 MBOX_MSG_HEADER_SET(seg_len, SEG_LEN) |
+		 MBOX_MSG_HEADER_SET(ack_type, NO_ACK) |
+		 MBOX_MSG_HEADER_SET(data_type, DATA_TYPE) |
+		 MBOX_MSG_HEADER_SET(MBOX_SEQ_ID_START_VAL, SEQID) |
+		 MBOX_MSG_HEADER_SET(direction, DIRECTION) |
+		 MBOX_MSG_HEADER_SET(cmd, CMD) |
+		 MBOX_MSG_HEADER_SET(msg_info->msg_id, MSG_ID) |
+		 MBOX_MSG_HEADER_SET(rsp_aeq_id, AEQ_ID) |
+		 MBOX_MSG_HEADER_SET(MBOX_MSG_FROM_MBOX, SOURCE) |
+		 MBOX_MSG_HEADER_SET(!!msg_info->status, STATUS);
+
+	while (!(MBOX_MSG_HEADER_GET(header, LAST))) {
+		if (left <= MBOX_SEG_LEN) {
+			header &= ~MBOX_MSG_HEADER_SEG_LEN_MASK;
+			header |= MBOX_MSG_HEADER_SET(left, SEG_LEN) |
+				  MBOX_MSG_HEADER_SET(1, LAST);
+			seg_len = left;
+		}
+
+		err = send_mbox_seg(mbox, header, dst_func, msg_seg,
+				    seg_len, msg_info);
+		if (err) {
+			dev_err(hwdev->dev, "Failed to send mbox seg, seq_id=0x%llx\n",
+				MBOX_MSG_HEADER_GET(header, SEQID));
+			goto err_send;
+		}
+
+		left -= MBOX_SEG_LEN;
+		msg_seg += MBOX_SEG_LEN;
+		seq_id++;
+		header &= ~MBOX_MSG_HEADER_SEQID_MASK;
+		header |= MBOX_MSG_HEADER_SET(seq_id, SEQID);
+	}
+
+err_send:
+	return err;
+}
+
+static void set_mbox_to_func_event(struct hinic3_mbox *mbox,
+				   enum mbox_event_state event_flag)
+{
+	spin_lock(&mbox->mbox_lock);
+	mbox->event_flag = event_flag;
+	spin_unlock(&mbox->mbox_lock);
+}
+
+static enum hinic3_wait_return check_mbox_msg_finish(void *priv_data)
+{
+	struct hinic3_mbox *mbox = priv_data;
+
+	return (mbox->event_flag == MBOX_EVENT_SUCCESS) ?
+	       HINIC3_WAIT_PROCESS_CPL : HINIC3_WAIT_PROCESS_WAITING;
+}
+
+static int wait_mbox_msg_completion(struct hinic3_mbox *mbox,
+				    u32 timeout)
+{
+	u32 wait_time;
+	int err;
+
+	wait_time = (timeout != 0) ? timeout : MBOX_COMP_POLLING_TIMEOUT_MS;
+	err = hinic3_wait_for_timeout(mbox, check_mbox_msg_finish,
+				      wait_time, USEC_PER_MSEC);
+	if (err) {
+		set_mbox_to_func_event(mbox, MBOX_EVENT_TIMEOUT);
+		return err;
+	}
+	set_mbox_to_func_event(mbox, MBOX_EVENT_END);
+
+	return 0;
+}
+
 int hinic3_send_mbox_to_mgmt(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
 			     const struct mgmt_msg_params *msg_params)
 {
-	/* Completed by later submission due to LoC limit. */
-	return -EFAULT;
+	struct hinic3_mbox *mbox = hwdev->mbox;
+	struct mbox_msg_info msg_info = {};
+	struct hinic3_msg_desc *msg_desc;
+	int err;
+
+	/* expect response message */
+	msg_desc = get_mbox_msg_desc(mbox, MBOX_MSG_RESP, MBOX_MGMT_FUNC_ID);
+	mutex_lock(&mbox->mbox_send_lock);
+	msg_info.msg_id = (mbox->send_msg_id + 1) & 0xF;
+	mbox->send_msg_id = msg_info.msg_id;
+	set_mbox_to_func_event(mbox, MBOX_EVENT_START);
+
+	err = send_mbox_msg(mbox, mod, cmd, msg_params->buf_in,
+			    msg_params->in_size, MBOX_MGMT_FUNC_ID,
+			    MBOX_MSG_SEND, MBOX_MSG_ACK, &msg_info);
+	if (err) {
+		dev_err(hwdev->dev, "Send mailbox mod %u, cmd %u failed, msg_id: %u, err: %d\n",
+			mod, cmd, msg_info.msg_id, err);
+		set_mbox_to_func_event(mbox, MBOX_EVENT_FAIL);
+		goto err_send;
+	}
+
+	if (wait_mbox_msg_completion(mbox, msg_params->timeout_ms)) {
+		dev_err(hwdev->dev,
+			"Send mbox msg timeout, msg_id: %u\n", msg_info.msg_id);
+		err = -ETIMEDOUT;
+		goto err_send;
+	}
+
+	if (mod != msg_desc->mod || cmd != msg_desc->cmd) {
+		dev_err(hwdev->dev,
+			"Invalid response mbox message, mod: 0x%x, cmd: 0x%x, expect mod: 0x%x, cmd: 0x%x\n",
+			msg_desc->mod, msg_desc->cmd, mod, cmd);
+		err = -EFAULT;
+		goto err_send;
+	}
+
+	if (msg_desc->msg_info.status) {
+		err = msg_desc->msg_info.status;
+		goto err_send;
+	}
+
+	if (msg_params->buf_out) {
+		if (msg_desc->msg_len != msg_params->expected_out_size) {
+			dev_err(hwdev->dev,
+				"Invalid response mbox message length: %u for mod %d cmd %u, expected length: %u\n",
+				msg_desc->msg_len, mod, cmd,
+				msg_params->expected_out_size);
+			err = -EFAULT;
+			goto err_send;
+		}
+
+		memcpy(msg_params->buf_out, msg_desc->msg, msg_desc->msg_len);
+	}
+
+err_send:
+	mutex_unlock(&mbox->mbox_send_lock);
+
+	return err;
+}
+
+int hinic3_send_mbox_to_mgmt_no_ack(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+				    const struct mgmt_msg_params *msg_params)
+{
+	struct hinic3_mbox *mbox = hwdev->mbox;
+	struct mbox_msg_info msg_info = {};
+	int err;
+
+	mutex_lock(&mbox->mbox_send_lock);
+	err = send_mbox_msg(mbox, mod, cmd, msg_params->buf_in,
+			    msg_params->in_size, MBOX_MGMT_FUNC_ID,
+			    MBOX_MSG_SEND, MBOX_MSG_NO_ACK, &msg_info);
+	if (err)
+		dev_err(hwdev->dev, "Send mailbox no ack failed\n");
+
+	mutex_unlock(&mbox->mbox_send_lock);
+
+	return err;
 }
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_mbox.h b/drivers/net/ethernet/huawei/hinic3/hinic3_mbox.h
index abf1b911ad3a..5f19a30236c8 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_mbox.h
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_mbox.h
@@ -38,6 +38,26 @@ enum mbox_msg_direction_type {
 	MBOX_MSG_RESP = 1,
 };
 
+/* Indicates if mbox message expects a response (ack) or not */
+enum mbox_msg_ack_type {
+	MBOX_MSG_ACK = 0,
+	MBOX_MSG_NO_ACK = 1,
+};
+
+enum mbox_msg_data_type {
+	MBOX_MSG_DATA_INLINE = 0,
+	MBOX_MSG_DATA_DMA = 1,
+};
+
+enum mbox_msg_src_type {
+	MBOX_MSG_FROM_MBOX = 1,
+};
+
+enum mbox_msg_aeq_type {
+	MBOX_MSG_AEQ_FOR_EVENT = 0,
+	MBOX_MSG_AEQ_FOR_MBOX = 1,
+};
+
 #define HINIC3_MBOX_WQ_NAME  "hinic3_mbox"
 
 struct mbox_msg_info {
@@ -114,5 +134,7 @@ void hinic3_free_mbox(struct hinic3_hwdev *hwdev);
 
 int hinic3_send_mbox_to_mgmt(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
 			     const struct mgmt_msg_params *msg_params);
+int hinic3_send_mbox_to_mgmt_no_ack(struct hinic3_hwdev *hwdev, u8 mod, u16 cmd,
+				    const struct mgmt_msg_params *msg_params);
 
 #endif
diff --git a/drivers/net/ethernet/huawei/hinic3/hinic3_queue_common.h b/drivers/net/ethernet/huawei/hinic3/hinic3_queue_common.h
index ec4cae0a0929..2bf7a70251bb 100644
--- a/drivers/net/ethernet/huawei/hinic3/hinic3_queue_common.h
+++ b/drivers/net/ethernet/huawei/hinic3/hinic3_queue_common.h
@@ -48,6 +48,7 @@ static inline void *get_q_element(const struct hinic3_queue_pages *qpages,
 	*remaining_in_page = elem_per_pg - elem_idx;
 	ofs = elem_idx << qpages->elem_size_shift;
 	page = qpages->pages + page_idx;
+
 	return (char *)page->align_vaddr + ofs;
 }
 
-- 
2.43.0
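
P.S. For readers following the DMA fast path: mbox_prepare_dma_entry() above
protects each queue entry with a dword-XOR checksum (0x5a5a5a5a seed, length
rounded up to 4 bytes) that the firmware recomputes on its side. Below is a
minimal standalone userspace sketch of that same computation; the sample
message bytes and the main() harness are invented for illustration, while the
driver operates on the zero-padded DMA buffer instead.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MBOX_DMA_MSG_INIT_XOR_VAL  0x5a5a5a5a
#define MBOX_XOR_DATA_ALIGN        4

/* Round len up to a multiple of align (align must be a power of two). */
#define ALIGN_UP(len, align)  (((len) + (align) - 1) & ~((align) - 1))

/* Same dword XOR as the driver: seed with 0x5a5a5a5a, then fold in every
 * 32-bit word of the buffer.
 */
static uint32_t mbox_dma_msg_xor(const uint32_t *data, uint32_t msg_len)
{
	uint32_t xor = MBOX_DMA_MSG_INIT_XOR_VAL;
	uint32_t dw_len = msg_len / sizeof(uint32_t);
	uint32_t i;

	for (i = 0; i < dw_len; i++)
		xor ^= data[i];

	return xor;
}

int main(void)
{
	/* Invented 11-byte message; the trailing pad byte stays zero so the
	 * XOR over the aligned length is well defined, mirroring the ALIGN()
	 * the driver applies before computing the checksum.
	 */
	const uint8_t msg[] = "hinic3-msg";
	uint32_t buf[ALIGN_UP(sizeof(msg), MBOX_XOR_DATA_ALIGN) / 4] = { 0 };

	memcpy(buf, msg, sizeof(msg));
	printf("xor = 0x%08x\n",
	       mbox_dma_msg_xor(buf, ALIGN_UP(sizeof(msg), MBOX_XOR_DATA_ALIGN)));
	return 0;
}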