[PATCH v2 1/2] mailbox: add async request mechanism to empower controllers w/ hw queues

Tudor Ambarus posted 2 patches 1 month, 1 week ago
Posted by Tudor Ambarus 1 month, 1 week ago
The current form of the mailbox framework doesn't allow controllers to
benefit from their hardware queue capabilities, as the framework handles
a single active request at a time.

The active request is considered completed when TX completes. But it seems
that TX is not in direct relation with RX, so a client can't know to which
TX data the RX data corresponds. Consider a client that sends TX1 data:
mbox->ops->send_data() is called, the timer is started immediately,
last_tx_done() returns true and the client is notified that TX1 completed.
The client then sends TX2 and gets notified that TX2 completed. RX comes
in, and mbox_chan_received_data(chan, mssg) is called after both TX1 and
TX2 completed. The client can't know whether the received mssg belongs to
TX1 or TX2.

In order to address these shortcomings, add a simple async mechanism
based on requests. A request contains pointers to the TX data and to the
RX data (if any), along with a pointer to a completion struct. It is the
responsibility of the client to allocate and fill the request:

static int client_send_request(struct demo_client *dc, void *data,
			       size_t data_len)
{
	DECLARE_MBOX_WAIT(wait);
	struct mbox_request req;
	int ret;

	req.wait = &wait;
	req.tx = data;
	req.txlen = data_len;
	/*
	 * Set req.rx = NULL if no response is expected. Here we
	 * use the same memory to get the response.
	 */
	req.rx = data;
	req.rxlen = data_len;

	ret = mbox_send_request(dc->mbox_chan, &req);
	ret = mbox_wait_request(ret, &wait);
	if (ret)
		dev_err(dc->dev, "%s failed %d\n", __func__, ret);
	return ret;
}

mbox_send_request() sends a message on the bus. The call may be made from
atomic context. It returns -EINPROGRESS if data is fed into hardware,
-ENOSPC when the hardware queue is full, or zero when the request
completes. The message (queue) handling is thus deferred to the controller.
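
On the controller side, ops->send_request() is expected to follow the same
contract. A minimal sketch of such a hook (the demo_* helpers and the
private struct are made up for illustration, they are not part of this
patch):

static int demo_mbox_send_request(struct mbox_chan *chan,
				  struct mbox_request *req)
{
	struct demo_mbox *dm = chan->con_priv;

	/* Hardware queue full: let the client decide how to back off. */
	if (demo_hw_queue_full(dm))
		return -ENOSPC;

	/* Feed the TX data into the hardware queue. */
	demo_hw_enqueue(dm, req->tx, req->txlen);

	/*
	 * If the request can be completed right away (e.g. no response
	 * expected and TX considered done once queued), return 0.
	 * Otherwise report it as in flight; the RX handler later matches
	 * the response to the request, copies it to req->rx and calls
	 * mbox_request_complete(req, 0).
	 */
	return req->rx ? -EINPROGRESS : 0;
}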

A similar mechanism is used in the crypto subsystem.

The async request mechanism is mutually exclusive with the current message
software queue handling. In the future the software queue handling can
be used as an opt-in backlog choice for users that need it, but that will
require converting ``void *message`` to ``struct mbox_request *req``
throughout the API.

Signed-off-by: Tudor Ambarus <tudor.ambarus@linaro.org>
---
 drivers/mailbox/mailbox.c          | 127 +++++++++++++++++++++++------
 include/linux/mailbox_client.h     |   4 +
 include/linux/mailbox_controller.h |   7 ++
 include/linux/mailbox_request.h    |  33 ++++++++
 4 files changed, 148 insertions(+), 23 deletions(-)
 create mode 100644 include/linux/mailbox_request.h

diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
index d3d26a2c9895..4fe905710458 100644
--- a/drivers/mailbox/mailbox.c
+++ b/drivers/mailbox/mailbox.c
@@ -158,6 +158,11 @@ static enum hrtimer_restart txdone_hrtimer(struct hrtimer *hrtimer)
  */
 void mbox_chan_received_data(struct mbox_chan *chan, void *mssg)
 {
+	if (chan->mbox->ops->send_request) {
+		dev_dbg(chan->mbox->dev, "Operation not supported for mailbox requests\n");
+		return;
+	}
+
 	/* No buffering the received data */
 	if (chan->cl->rx_callback)
 		chan->cl->rx_callback(chan->cl, mssg);
@@ -176,6 +181,11 @@ EXPORT_SYMBOL_GPL(mbox_chan_received_data);
  */
 void mbox_chan_txdone(struct mbox_chan *chan, int r)
 {
+	if (chan->mbox->ops->send_request) {
+		dev_dbg(chan->mbox->dev, "Operation not supported for mailbox requests\n");
+		return;
+	}
+
 	if (unlikely(!(chan->txdone_method & TXDONE_BY_IRQ))) {
 		dev_err(chan->mbox->dev,
 		       "Controller can't run the TX ticker\n");
@@ -197,6 +207,11 @@ EXPORT_SYMBOL_GPL(mbox_chan_txdone);
  */
 void mbox_client_txdone(struct mbox_chan *chan, int r)
 {
+	if (chan->mbox->ops->send_request) {
+		dev_dbg(chan->mbox->dev, "Operation not supported for mailbox requests\n");
+		return;
+	}
+
 	if (unlikely(!(chan->txdone_method & TXDONE_BY_ACK))) {
 		dev_err(chan->mbox->dev, "Client can't run the TX ticker\n");
 		return;
@@ -261,6 +276,11 @@ int mbox_send_message(struct mbox_chan *chan, void *mssg)
 	if (!chan || !chan->cl)
 		return -EINVAL;
 
+	if (chan->mbox->ops->send_request) {
+		dev_dbg(chan->mbox->dev, "Operation not supported for mailbox requests\n");
+		return -EOPNOTSUPP;
+	}
+
 	t = add_to_rbuf(chan, mssg);
 	if (t < 0) {
 		dev_err(chan->mbox->dev, "Try increasing MBOX_TX_QUEUE_LEN\n");
@@ -289,6 +309,39 @@ int mbox_send_message(struct mbox_chan *chan, void *mssg)
 }
 EXPORT_SYMBOL_GPL(mbox_send_message);
 
+int mbox_send_request(struct mbox_chan *chan, struct mbox_request *req)
+{
+	return chan->mbox->ops->send_request(chan, req);
+}
+EXPORT_SYMBOL_GPL(mbox_send_request);
+
+int mbox_wait_request(int err, struct mbox_wait *wait)
+{
+	switch (err) {
+	case -EINPROGRESS:
+		wait_for_completion(&wait->completion);
+		reinit_completion(&wait->completion);
+		err = wait->err;
+		break;
+	}
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(mbox_wait_request);
+
+void mbox_request_complete(struct mbox_request *req, int err)
+{
+	struct mbox_wait *wait;
+
+	if (err == -EINPROGRESS)
+		return;
+
+	wait = req->wait;
+	wait->err = err;
+	complete(&wait->completion);
+}
+EXPORT_SYMBOL_GPL(mbox_request_complete);
+
 /**
  * mbox_flush - flush a mailbox channel
  * @chan: mailbox channel to flush
@@ -311,24 +364,44 @@ int mbox_flush(struct mbox_chan *chan, unsigned long timeout)
 		return -ENOTSUPP;
 
 	ret = chan->mbox->ops->flush(chan, timeout);
-	if (ret < 0)
+	if (ret < 0 && !chan->mbox->ops->send_request)
 		tx_tick(chan, ret);
 
 	return ret;
 }
 EXPORT_SYMBOL_GPL(mbox_flush);
 
+static int mbox_chan_startup(struct mbox_chan *chan, struct mbox_client *cl)
+{
+	struct mbox_controller *mbox = chan->mbox;
+	struct device *dev = cl->dev;
+	int ret;
+
+	if (!mbox->ops->startup)
+		return 0;
+
+	ret = mbox->ops->startup(chan);
+	if (ret) {
+		dev_err(dev, "Unable to startup the chan (%d)\n", ret);
+		mbox_free_channel(chan);
+	}
+
+	return ret;
+}
+
 static int __mbox_bind_client(struct mbox_chan *chan, struct mbox_client *cl)
 {
 	struct device *dev = cl->dev;
 	unsigned long flags;
-	int ret;
 
 	if (chan->cl || !try_module_get(chan->mbox->dev->driver->owner)) {
 		dev_dbg(dev, "%s: mailbox not free\n", __func__);
 		return -EBUSY;
 	}
 
+	if (chan->mbox->ops->send_request)
+		return mbox_chan_startup(chan, cl);
+
 	spin_lock_irqsave(&chan->lock, flags);
 	chan->msg_free = 0;
 	chan->msg_count = 0;
@@ -341,17 +414,7 @@ static int __mbox_bind_client(struct mbox_chan *chan, struct mbox_client *cl)
 
 	spin_unlock_irqrestore(&chan->lock, flags);
 
-	if (chan->mbox->ops->startup) {
-		ret = chan->mbox->ops->startup(chan);
-
-		if (ret) {
-			dev_err(dev, "Unable to startup the chan (%d)\n", ret);
-			mbox_free_channel(chan);
-			return ret;
-		}
-	}
-
-	return 0;
+	return mbox_chan_startup(chan, cl);
 }
 
 /**
@@ -474,13 +537,18 @@ EXPORT_SYMBOL_GPL(mbox_request_channel_byname);
  */
 void mbox_free_channel(struct mbox_chan *chan)
 {
+	struct mbox_controller *mbox;
 	unsigned long flags;
 
 	if (!chan || !chan->cl)
 		return;
 
-	if (chan->mbox->ops->shutdown)
-		chan->mbox->ops->shutdown(chan);
+	mbox = chan->mbox;
+	if (mbox->ops->shutdown)
+		mbox->ops->shutdown(chan);
+
+	if (mbox->ops->send_request)
+		return module_put(mbox->dev->driver->owner);
 
 	/* The queued TX requests are simply aborted, no callbacks are made */
 	spin_lock_irqsave(&chan->lock, flags);
@@ -489,7 +557,7 @@ void mbox_free_channel(struct mbox_chan *chan)
 	if (chan->txdone_method == TXDONE_BY_ACK)
 		chan->txdone_method = TXDONE_BY_POLL;
 
-	module_put(chan->mbox->dev->driver->owner);
+	module_put(mbox->dev->driver->owner);
 	spin_unlock_irqrestore(&chan->lock, flags);
 }
 EXPORT_SYMBOL_GPL(mbox_free_channel);
@@ -506,6 +574,13 @@ of_mbox_index_xlate(struct mbox_controller *mbox,
 	return &mbox->chans[ind];
 }
 
+static void mbox_controller_add_tail(struct mbox_controller *mbox)
+{
+	mutex_lock(&con_mutex);
+	list_add_tail(&mbox->node, &mbox_cons);
+	mutex_unlock(&con_mutex);
+}
+
 /**
  * mbox_controller_register - Register the mailbox controller
  * @mbox:	Pointer to the mailbox controller.
@@ -520,6 +595,17 @@ int mbox_controller_register(struct mbox_controller *mbox)
 	if (!mbox || !mbox->dev || !mbox->ops || !mbox->num_chans)
 		return -EINVAL;
 
+	if (mbox->ops->send_request && mbox->ops->send_data)
+		return -EINVAL;
+
+	if (!mbox->of_xlate)
+		mbox->of_xlate = of_mbox_index_xlate;
+
+	if (mbox->ops->send_request) {
+		mbox_controller_add_tail(mbox);
+		return 0;
+	}
+
 	if (mbox->txdone_irq)
 		txdone = TXDONE_BY_IRQ;
 	else if (mbox->txdone_poll)
@@ -549,12 +635,7 @@ int mbox_controller_register(struct mbox_controller *mbox)
 		spin_lock_init(&chan->lock);
 	}
 
-	if (!mbox->of_xlate)
-		mbox->of_xlate = of_mbox_index_xlate;
-
-	mutex_lock(&con_mutex);
-	list_add_tail(&mbox->node, &mbox_cons);
-	mutex_unlock(&con_mutex);
+	mbox_controller_add_tail(mbox);
 
 	return 0;
 }
@@ -578,7 +659,7 @@ void mbox_controller_unregister(struct mbox_controller *mbox)
 	for (i = 0; i < mbox->num_chans; i++)
 		mbox_free_channel(&mbox->chans[i]);
 
-	if (mbox->txdone_poll)
+	if (!mbox->ops->send_request && mbox->txdone_poll)
 		hrtimer_cancel(&mbox->poll_hrt);
 
 	mutex_unlock(&con_mutex);
diff --git a/include/linux/mailbox_client.h b/include/linux/mailbox_client.h
index 734694912ef7..2eb951fdee0b 100644
--- a/include/linux/mailbox_client.h
+++ b/include/linux/mailbox_client.h
@@ -9,6 +9,7 @@
 
 #include <linux/of.h>
 #include <linux/device.h>
+#include <linux/mailbox_request.h>
 
 struct mbox_chan;
 
@@ -47,4 +48,7 @@ void mbox_client_txdone(struct mbox_chan *chan, int r); /* atomic */
 bool mbox_client_peek_data(struct mbox_chan *chan); /* atomic */
 void mbox_free_channel(struct mbox_chan *chan); /* may sleep */
 
+int mbox_send_request(struct mbox_chan *chan, struct mbox_request *req);
+int mbox_wait_request(int err, struct mbox_wait *wait);
+
 #endif /* __MAILBOX_CLIENT_H */
diff --git a/include/linux/mailbox_controller.h b/include/linux/mailbox_controller.h
index 6fee33cb52f5..0582964b10a0 100644
--- a/include/linux/mailbox_controller.h
+++ b/include/linux/mailbox_controller.h
@@ -8,6 +8,7 @@
 #include <linux/hrtimer.h>
 #include <linux/device.h>
 #include <linux/completion.h>
+#include <linux/mailbox_request.h>
 
 struct mbox_chan;
 
@@ -20,6 +21,10 @@ struct mbox_chan;
  *		transmission of data is reported by the controller via
  *		mbox_chan_txdone (if it has some TX ACK irq). It must not
  *		sleep.
+ * @send_request: The API asks the MBOX controller driver to transmit a message
+ *                on the bus. The call may be in atomic context. Returns
+ *                -EINPROGRESS if data is fed into hardware, -ENOSPC when the
+ *                hardware queue is full, or zero when the request completes.
  * @flush:	Called when a client requests transmissions to be blocking but
  *		the context doesn't allow sleeping. Typically the controller
  *		will implement a busy loop waiting for the data to flush out.
@@ -45,6 +50,7 @@ struct mbox_chan;
  */
 struct mbox_chan_ops {
 	int (*send_data)(struct mbox_chan *chan, void *data);
+	int (*send_request)(struct mbox_chan *chan, struct mbox_request *req);
 	int (*flush)(struct mbox_chan *chan, unsigned long timeout);
 	int (*startup)(struct mbox_chan *chan);
 	void (*shutdown)(struct mbox_chan *chan);
@@ -131,6 +137,7 @@ int mbox_controller_register(struct mbox_controller *mbox); /* can sleep */
 void mbox_controller_unregister(struct mbox_controller *mbox); /* can sleep */
 void mbox_chan_received_data(struct mbox_chan *chan, void *data); /* atomic */
 void mbox_chan_txdone(struct mbox_chan *chan, int r); /* atomic */
+void mbox_request_complete(struct mbox_request *req, int err); /* can sleep */
 
 int devm_mbox_controller_register(struct device *dev,
 				  struct mbox_controller *mbox);
diff --git a/include/linux/mailbox_request.h b/include/linux/mailbox_request.h
new file mode 100644
index 000000000000..69e897e7d3a4
--- /dev/null
+++ b/include/linux/mailbox_request.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright 2024 Linaro Ltd.
+ */
+
+#ifndef __MAILBOX_REQUEST_H
+#define __MAILBOX_REQUEST_H
+
+#include <linux/bits.h>
+#include <linux/types.h>
+#include <linux/completion.h>
+
+struct mbox_wait {
+	struct completion completion;
+	int err;
+};
+
+#define DECLARE_MBOX_WAIT(_wait) \
+	struct mbox_wait _wait = { \
+		COMPLETION_INITIALIZER_ONSTACK((_wait).completion), 0 }
+
+#define MBOX_REQ_MAY_SLEEP	BIT(0)
+
+struct mbox_request {
+	struct mbox_wait *wait;
+	void *tx;
+	void *rx;
+	unsigned int txlen;
+	unsigned int rxlen;
+	u32 flags;
+};
+
+#endif /* __MAILBOX_REQUEST_H */
-- 
2.47.0.rc1.288.g06298d1525-goog
Re: [PATCH v2 1/2] mailbox: add async request mechanism to empower controllers w/ hw queues
Posted by Jassi Brar 1 month, 1 week ago
On Thu, Oct 17, 2024 at 11:36 AM Tudor Ambarus <tudor.ambarus@linaro.org> wrote:
>
> Current form of the mailbox framework doesn't allow controllers to benefit
> of their hardware queue capabilities as the framework handles a single
> active request at a time.
>
> The active request is considered completed when TX completes. But it seems
> that TX is not in direct relation with RX,
>
Correct, and it is not meant to be.
You are assuming there is always an RX in response to a TX, which is
not the case. Many platforms just send a message and only need to know
when it is sent. Many platforms only listen for incoming messages.
Many platforms have TX and RX, but not as parts of one exchange. In
fact, only a minority of platforms expect an RX after each TX. Btw, what
if some platform only ever sends after each receive? For these reasons,
it is left to the user to tie an incoming RX to some previous TX, or not.

Regards.
Jassi
Re: [PATCH v2 1/2] mailbox: add async request mechanism to empower controllers w/ hw queues
Posted by Tudor Ambarus 1 month, 1 week ago
Hi, Jassi,

Thanks for the review!

On 10/18/24 5:17 AM, Jassi Brar wrote:
> On Thu, Oct 17, 2024 at 11:36 AM Tudor Ambarus <tudor.ambarus@linaro.org> wrote:
>>
>> Current form of the mailbox framework doesn't allow controllers to benefit
>> of their hardware queue capabilities as the framework handles a single
>> active request at a time.
>>
>> The active request is considered completed when TX completes. But it seems
>> that TX is not in direct relation with RX,
>>
> Correct, and it is not meant to be.
> You are assuming there is always an RX in response to a TX, which is

Not really. If no response is expected, clients can set req->rx to
NULL. The controller then knows that no response is expected and can
complete the request when TX completes.
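
I.e. a fire-and-forget request would roughly look like this (a sketch,
reusing the names from the commit message example):

	req.wait = &wait;
	req.tx = data;
	req.txlen = data_len;
	req.rx = NULL;		/* no response expected */
	req.rxlen = 0;

	ret = mbox_send_request(dc->mbox_chan, &req);
	ret = mbox_wait_request(ret, &wait);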

> not the case. Many platforms just send a message and only need to know
> when it is sent. Many platforms only listen for incoming messages.

These two cases are covered by the req approach.

> Many platforms have TX and RX but not as parts of one exchange. In

I don't think I understand this case. Is it related to what you describe
below?

> fact, only minority of platforms expect RX after each TX. Btw, what if

Right, I noticed.

> some platform sends only and always after each receive? For these

This case is covered as well by the req approach. One just needs to
serialize the requests:

ret = mbox_send_request(dc->mbox_chan, req1);
ret = mbox_wait_request(ret, req1->wait);
if (ret)
	return ret;

// req1 completed, send req2
ret = mbox_send_request(dc->mbox_chan, req2);
ret = mbox_wait_request(ret, req2->wait);
if (ret)
	return ret;
	
This works regardless of whether the client expects a response or not. If
no response is expected, but just a TX completion, then the client can
set req->rx = NULL.

> reasons, it is left to the user to tie an incoming RX to some previous
> TX, or not.

It's possible I haven't covered all the cases, but I'm willing to
understand them and come up with a new version with your help, where I
address all the concerns.

Cheers,
ta
Re: [PATCH v2 1/2] mailbox: add async request mechanism to empower controllers w/ hw queues
Posted by Tudor Ambarus 1 month ago
Hi, Jassi,

On 10/18/24 8:49 AM, Tudor Ambarus wrote:
>>> The active request is considered completed when TX completes. But it seems
>>> that TX is not in direct relation with RX,
>>>
>> Correct, and it is not meant to be.
>> You are assuming there is always an RX in response to a TX, which is
> Not really. If there's no response expected, clients can set req->rx to
> NULL. Then the controllers know that no response is expected and can
> complete the request when TX completes.
> 
>> not the case. Many platforms just send a message and only need to know
>> when it is sent. Many platforms only listen for incoming messages.
> these 2 cases are covered with the req approach.
> 
>> Many platforms have TX and RX but not as parts of one exchange. In
> I don't think I understand this case. Is it related to what you describe
> below?
> 
>> fact, only minority of platforms expect RX after each TX. Btw, what if
> Right, I noticed.
> 
>> some platform sends only and always after each receive? For these
> This case is covered as well with the req approach. One just needs to
> serialize the requests:
> 
> ret = mbox_send_request(dc->mbox_chan, req1);
> ret = mbox_wait_request(ret, req1->wait);
> if (ret)
> 	return ret;
> 
> // req1 completed, send req2
> ret = mbox_send_request(dc->mbox_chan, req2);
> ret = mbox_wait_request(ret, req2->wait);
> if (ret)
> 	return ret;
> 	
> This shall work regardless if the client expects a response or not. If
> no response is expected, but just a TX completion, then the client can
> set req->rx = NULL.
> 
>> reasons, it is left to the user to tie an incoming RX to some previous
>> TX, or not.

Is there a specific driver that I can look at in order to understand the
case where RX is not tied to TX? It will speed me up a little.
Also, if you think there's a better way to enable controllers to manage
their hardware queues, please say.

Thanks,
ta
Re: [PATCH v2 1/2] mailbox: add async request mechanism to empower controllers w/ hw queues
Posted by Jassi Brar 1 month ago
> On 10/18/24 8:49 AM, Tudor Ambarus wrote:

> The active request is considered completed when TX completes. But it seems
> that TX is not in direct relation with RX,

TX and RX are assumed to be independent operations (which they are).
TX means sending a message/request to the remote successfully. 'Success'
can be indicated by any of the three methods TXDONE_BY_{POLLING, IRQ,
ACK}.
You seem to assume it should always be an ACK where we receive an
acknowledgment/response packet, which is not the case.

...

>> Correct, and it is not meant to be.
>> You are assuming there is always an RX in response to a TX, which is

> Not really. If there's no response expected, clients can set req->rx to NULL.

You are assuming the default behaviour is that there is a reply
associated with a TX, otherwise "clients can set req->rx to NULL".
This api can be _used_ only for protocols that expect a response for
each TX. For other protocols, it is simply a passthrough wrapper (by
doing things like req->rx = NULL). For handling this special case of
Tx->Rx, maybe a higher-level common helper function would be better.
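
Something along these lines perhaps, layered on the existing client api
(the name and shape are purely illustrative, and it assumes a
TXDONE_BY_ACK controller):

int mbox_send_and_wait_response(struct mbox_chan *chan, void *msg,
				int (*wait_rx)(struct mbox_chan *chan,
					       void *msg))
{
	int ret;

	ret = mbox_send_message(chan, msg);
	if (ret < 0)
		return ret;

	/* TX is done once the message is deposited in the h/w queue. */
	mbox_client_txdone(chan, 0);

	/* The protocol layer knows how to match the response to msg. */
	return wait_rx(chan, msg);
}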

...

>> reasons, it is left to the user to tie an incoming RX to some previous
>> TX, or not.

> Is there a specific driver that I can look at in order to understand the
> case where RX is not tied to TX?

 Many...
 * The remote end owns sensors and can asynchronously send, say
thermal, notifications to Linux.
 * On some platform the RX may be asynchronous, that is sent later
with some identifying tag of the past TX.
 * Just reverse the roles of local and remote - remote sends us a
request (RX for us) and we are supposed to send a response (TX).

> Also, if you think there's a better way to enable controllers to manage
> their hardware queues, please say.
>
Tying RX to TX has nothing to do with hardware queues. There can be a
hardware queue and the protocol can still be
"local-to-remote-broadcast".

While I don't think we need the "Rx is in relation to some past Tx"
api, I am open to ideas to better utilize h/w queues.

The h/w TX queue/fifo may hold, say, 8 messages while the controller
transmits them to the remote. Currently this is handled by
.last_tx_done() returning true as long as the fifo is not full.
The other option, having more than one TX in flight, only complicates
the mailbox api without bringing any real benefit, because the
protocol would still need to handle cases like the Tx being successful
but the remote rejecting the request, or the Rx failing on the h/w fifo
(there could be an rx-fifo too, right). That is where I am at right now.
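
I.e. roughly something like this (the register and struct names are
hypothetical, just to illustrate the point):

static bool demo_mbox_last_tx_done(struct mbox_chan *chan)
{
	struct demo_mbox *dm = chan->con_priv;

	/* Report TX done as long as there is room in the h/w fifo. */
	return !(readl(dm->base + DEMO_FIFO_STATUS) & DEMO_FIFO_FULL);
}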

Regards,
Jassi
Re: [PATCH v2 1/2] mailbox: add async request mechanism to empower controllers w/ hw queues
Posted by Tudor Ambarus 1 month ago
Hi, Jassi,

On 10/21/24 5:32 PM, Jassi Brar wrote:
>> On 10/18/24 8:49 AM, Tudor Ambarus wrote:
> 
>> The active request is considered completed when TX completes. But it seems
>> that TX is not in direct relation with RX,
> 
> TX and RX are assumed independent operations (which they are).
> TX is sending a message/request to the remote successfully. 'Success'
> can be indicated by any of the three methods  TXDONE_BY_{POLLING, IRQ,
> ACK}.
> You seem to assume it should always be an ACK where we receive an
> acknowledgment/response packet, which is not the case.

My controller driver indeed ties TX to RX and considers the request
completed when the RX completes.

But other drivers can still complete the request at TXDONE if that's
what they need.

> 
> ...
> 
>>> Correct, and it is not meant to be.
>>> You are assuming there is always an RX in response to a TX, which is
> 
>> Not really. If there's no response expected, clients can set req->rx to NULL.
> 
> You are assuming the default behaviour is that there is a reply
> associated with a TX, otherwise "clients can set req->rx to NULL".
> This api can be _used_ only for protocols that expect a response for
> each TX. For other protocols,  it is simply a passthrough wrapper (by
> doing things like req->rx = NULL). For handling this special case of
> Tx->Rx, maybe a higher level common helper function would be better.
> 
> ...
> 
>>> reasons, it is left to the user to tie an incoming RX to some previous
>>> TX, or not.
> 
>> Is there a specific driver that I can look at in order to understand the
>> case where RX is not tied to TX?
> 
>  Many...
>  * The remote end owns sensors and can asynchronously send, say
> thermal, notifications to Linux.
>  * On some platform the RX may be asynchronous, that is sent later
> with some identifying tag of the past TX.
>  * Just reverse the roles of local and remote - remote sends us a
> request (RX for us) and we are supposed to send a response (TX).

I was hoping for a name of a driver, but I guess I can find them out
eventually.
> 
>> Also, if you think there's a better way to enable controllers to manage
>> their hardware queues, please say.
>>
> Tying RX to TX has nothing to do with hardware queues. There can be a

Right, I agree.

> hardware queue and the protocol can still be
> "local-to-remote-broadcast".
> 
> While I don't think we need the "Rx is in relation to some past Tx"
> api, I am open to ideas to better utilize h/w queues.
> 
> The h/w TX queue/fifo may hold, say, 8 messages while the controller
> transmits them to the remote. Currently it is implemented by
> .last_tx_done() returning true as long as fifo is not full.
> The other option, to have more than one TX in-flight, only complicates
> the mailbox api without fetching any real benefits because the
> protocol would still need to handle cases like Tx was successful but
> the remote rejected the request or the Rx failed on the h/w fifo
> (there could be rx-fifo too, right). Is where I am at right now.
> 
No worries, I'm confident we'll reach a conclusion.

It's true I implemented just my use case, but that doesn't mean that the
"request" approach can't be extended for current users.

If we replace `void *message` with `struct mbox_request *req` throughout
the mailbox core, all the users can still do their magic, can't they?
The only difference would be that instead of having a completion
structure per channel, we have a completion structure per request.

In order to have more than one TX in flight, each TX needs its own
completion struct. Then, for my case, if the clients expect a response
for each TX, it shall be the responsibility of the client to allocate
space for RX. This is exactly what struct mbox_request does:

+struct mbox_wait {
+	struct completion completion;
+	int err;
+};
+
+#define DECLARE_MBOX_WAIT(_wait) \
+	struct mbox_wait _wait = { \
+		COMPLETION_INITIALIZER_ONSTACK((_wait).completion), 0 }
+
+#define MBOX_REQ_MAY_SLEEP	BIT(0)
+
+struct mbox_request {
+	struct mbox_wait *wait;
+	void *tx;
+	void *rx;
+	unsigned int txlen;
+	unsigned int rxlen;
+	u32 flags;
+};

Also, in order to have more than one TX in flight, we need to allow
controller drivers to bypass the mailbox core's
one-active-request-at-a-time handling. The current software queue
handling mechanism from the mailbox core can be used as a backlog
mechanism: if the controller driver has no space to process a new
request (regardless of whether it has a hardware queue or not), it can
fall back to the backlog mechanism where one request is handled at a
time. The use of the backlog mechanism shall be an opt-in choice.

Now clients that don't care about RX are allowed to not allocate space
for it, and consider the request completed at TX done, if that's what
they need.

What am I missing?

Thanks,
ta
Re: [PATCH v2 1/2] mailbox: add async request mechanism to empower controllers w/ hw queues
Posted by Jassi Brar 1 month ago
Hi Tudor,

On Tue, Oct 22, 2024 at 8:27 AM Tudor Ambarus <tudor.ambarus@linaro.org> wrote:
>
> Hi, Jassi,
>
> On 10/21/24 5:32 PM, Jassi Brar wrote:
> >> On 10/18/24 8:49 AM, Tudor Ambarus wrote:
> >
> >> The active request is considered completed when TX completes. But it seems
> >> that TX is not in direct relation with RX,
> >
> > TX and RX are assumed independent operations (which they are).
> > TX is sending a message/request to the remote successfully. 'Success'
> > can be indicated by any of the three methods  TXDONE_BY_{POLLING, IRQ,
> > ACK}.
> > You seem to assume it should always be an ACK where we receive an
> > acknowledgment/response packet, which is not the case.
>
> My controller driver indeed ties TX to RX and considers the request
> completed when the RX completes.
>
Does your controller require RX or the protocol? I suspect the latter.
Anyway, the former is also supported by TXDONE_BY_ACK already.

> >
> >> Is there a specific driver that I can look at in order to understand the
> >> case where RX is not tied to TX?
> >
> >  Many...
> >  * The remote end owns sensors and can asynchronously send, say
> > thermal, notifications to Linux.
> >  * On some platform the RX may be asynchronous, that is sent later
> > with some identifying tag of the past TX.
> >  * Just reverse the roles of local and remote - remote sends us a
> > request (RX for us) and we are supposed to send a response (TX).
>
> I was hoping for a name of a driver, but I guess I can find them out
> eventually.
>
Do these usecases seem impossible to you? Many users are not upstream,
but we still care about them as long as we are not making some
corrective change.


> >
> >> Also, if you think there's a better way to enable controllers to manage
> >> their hardware queues, please say.
> >>
> > Tying RX to TX has nothing to do with hardware queues. There can be a
>
> Right, I agree.
>
> > hardware queue and the protocol can still be
> > "local-to-remote-broadcast".
> >
> > While I don't think we need the "Rx is in relation to some past Tx"
> > api, I am open to ideas to better utilize h/w queues.
> >
> > The h/w TX queue/fifo may hold, say, 8 messages while the controller
> > transmits them to the remote. Currently it is implemented by
> > .last_tx_done() returning true as long as fifo is not full.
> > The other option, to have more than one TX in-flight, only complicates
> > the mailbox api without fetching any real benefits because the
> > protocol would still need to handle cases like Tx was successful but
> > the remote rejected the request or the Rx failed on the h/w fifo
> > (there could be rx-fifo too, right). Is where I am at right now.
> >
> No worries, I'm confident we'll reach a conclusion.
>
> It's true I implemented just my use case, but that doesn't mean that the
> "request" approach can't be extended for current users.
>
Sorry, I am not sure we should make the api dance around your usecase.
If your usecase is not currently handled, please let me know. We can
discuss that.

Thanks.
Re: [PATCH v2 1/2] mailbox: add async request mechanism to empower controllers w/ hw queues
Posted by Tudor Ambarus 1 month ago

On 10/24/24 2:27 AM, Jassi Brar wrote:
> Hi Tudor,
> 

Hi, Jassi!

I appreciate that you respond quickly on my emails, thanks!

> On Tue, Oct 22, 2024 at 8:27 AM Tudor Ambarus <tudor.ambarus@linaro.org> wrote:
>>
>> Hi, Jassi,
>>
>> On 10/21/24 5:32 PM, Jassi Brar wrote:
>>>> On 10/18/24 8:49 AM, Tudor Ambarus wrote:
>>>
>>>> The active request is considered completed when TX completes. But it seems
>>>> that TX is not in direct relation with RX,
>>>
>>> TX and RX are assumed independent operations (which they are).
>>> TX is sending a message/request to the remote successfully. 'Success'
>>> can be indicated by any of the three methods  TXDONE_BY_{POLLING, IRQ,
>>> ACK}.
>>> You seem to assume it should always be an ACK where we receive an
>>> acknowledgment/response packet, which is not the case.
>>
>> My controller driver indeed ties TX to RX and considers the request
>> completed when the RX completes.
>>
> Does your controller require RX or the protocol? I suspect the latter.

The response from the remote always happens for the acpm protocol. The
same goes for the plain (no acpm or SRAM involved) mailbox controller
that I see downstream.

While the response from remote always happens, the RX data is optional.
Clients can choose whether they want the data from the RX ring or not
(see `struct exynos_acpm_rx_data`).

For each message that is sent on the TX ring, it is expected that the
remote consumes it. The remote gets the message from the TX ring,
advances the rear index of the TX ring, processes the request, writes
the response message (if any) on the Linux RX ring and advances the
front index of the Linux RX ring.

> Anyway, the former is also supported by TXDONE_BY_ACK already.

If we want to focus on when TX is done and really be strict about it,
then for my case TX can be considered done when the remote consumes it.
I need to poll and check when the Linux TX ring rear index is moved by
the remote. Another option is to poll the IRQ status register of the
remote to see when the request was consumed. So maybe TXDONE_BY_POLL is
more accurate?
TX can also be considered done once what we write to the TX ring hits
the endpoint SRAM, in which case neither of these flags is needed.
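
Roughly something like this, with made-up names, polling the SRAM-backed
ring indices:

static bool acpm_tx_ring_consumed(struct acpm_chan *achan)
{
	/*
	 * The remote advances the rear index as it consumes messages from
	 * the TX ring; once it catches up with our front index, everything
	 * we queued has been picked up.
	 */
	return readl(achan->tx_rear) == readl(achan->tx_front);
}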

As a side note on IRQs, the acpm protocol allows channels to work either
in polling or in IRQ mode. I expect in the future we'll need to specify
the done method per channel and not per controller. IRQs are not used
downstream, thus I didn't touch them in this proposal.

> 
>>>
>>>> Is there a specific driver that I can look at in order to understand the
>>>> case where RX is not tied to TX?
>>>
>>>  Many...
>>>  * The remote end owns sensors and can asynchronously send, say
>>> thermal, notifications to Linux.
>>>  * On some platform the RX may be asynchronous, that is sent later
>>> with some identifying tag of the past TX.
>>>  * Just reverse the roles of local and remote - remote sends us a
>>> request (RX for us) and we are supposed to send a response (TX).
>>
>> I was hoping for a name of a driver, but I guess I can find them out
>> eventually.
>>
> Do these usecases seem impossible to you? Many users are not upstream

They seem fine, I just wanted to see the implementation and decide
whether the request approach can be applied to them or not. I think it can.

> that we care
> about as long as we are not making some corrective change.>
> 
>>>
>>>> Also, if you think there's a better way to enable controllers to manage
>>>> their hardware queues, please say.
>>>>
>>> Tying RX to TX has nothing to do with hardware queues. There can be a
>>
>> Right, I agree.
>>
>>> hardware queue and the protocol can still be
>>> "local-to-remote-broadcast".
>>>
>>> While I don't think we need the "Rx is in relation to some past Tx"
>>> api, I am open to ideas to better utilize h/w queues.
>>>
>>> The h/w TX queue/fifo may hold, say, 8 messages while the controller
>>> transmits them to the remote. Currently it is implemented by
>>> .last_tx_done() returning true as long as fifo is not full.
>>> The other option, to have more than one TX in-flight, only complicates
>>> the mailbox api without fetching any real benefits because the
>>> protocol would still need to handle cases like Tx was successful but
>>> the remote rejected the request or the Rx failed on the h/w fifo
>>> (there could be rx-fifo too, right). Is where I am at right now.
>>>
>> No worries, I'm confident we'll reach a conclusion.
>>
>> It's true I implemented just my use case, but that doesn't mean that the
>> "request" approach can't be extended for current users.
>>
> Sorry, I am not sure we should make the api dance around your usecase.

No worries, it's fine to disagree.

> If your usecase is not currently handled, please let me know. We can
> discuss that.

It's not handled. I have a list of requirements to fulfill which are not
covered by the mailbox core:

1/ allow multiple TX in flight. We need to let the controller handle its
hardware queue, otherwise the hardware queue makes no sense at all.
2/ allow tying a TX to an RX. I need to know which TX the response
corresponds to.
3/ allow multiple clients on the same channel. ACPM allows this. Support
would have come as an extra patch.
4/ allow polling and IRQ channels for the same mailbox controller (not
urgent).

I guess we agree that in order to allow multiple TX in flight we need a
completion struct per message/request and not per channel. But we
disagree on the ability to tie a TX to an RX.

How can I move forward with this? No better options come to mind right
now. Shall I take the Apple approach and move everything to drivers/soc?
If any of you has another idea, shoot.

Thanks,
ta
Re: [PATCH v2 1/2] mailbox: add async request mechanism to empower controllers w/ hw queues
Posted by Jassi Brar 3 weeks, 6 days ago
On Thu, Oct 24, 2024 at 5:45 AM Tudor Ambarus <tudor.ambarus@linaro.org> wrote:
>
>
>
> On 10/24/24 2:27 AM, Jassi Brar wrote:
> > Hi Tudor,
> >
>
> Hi, Jassi!
>
> I appreciate that you respond quickly on my emails, thanks!
>
> > On Tue, Oct 22, 2024 at 8:27 AM Tudor Ambarus <tudor.ambarus@linaro.org> wrote:
> >>
> >> Hi, Jassi,
> >>
> >> On 10/21/24 5:32 PM, Jassi Brar wrote:
> >>>> On 10/18/24 8:49 AM, Tudor Ambarus wrote:
> >>>
> >>>> The active request is considered completed when TX completes. But it seems
> >>>> that TX is not in direct relation with RX,
> >>>
> >>> TX and RX are assumed independent operations (which they are).
> >>> TX is sending a message/request to the remote successfully. 'Success'
> >>> can be indicated by any of the three methods  TXDONE_BY_{POLLING, IRQ,
> >>> ACK}.
> >>> You seem to assume it should always be an ACK where we receive an
> >>> acknowledgment/response packet, which is not the case.
> >>
> >> My controller driver indeed ties TX to RX and considers the request
> >> completed when the RX completes.
> >>
> > Does your controller require RX or the protocol? I suspect the latter.
>
> The response from remote always happens for the acpm protocol. Same for
> the plain (no acpm or SRAM involved) mailbox controller that I see in
> downstream.
>
> While the response from remote always happens, the RX data is optional.
> Clients can choose whether they want the data from the RX ring or not
> (see `struct exynos_acpm_rx_data`).
>
> For each message that is sent on the TX ring, it is expected that the
> remote consumes it. The remote gets the message from the TX queue,
> advances the rear index of the TX ring, processes the request, writes
> the response message (if any) on the linux RX ring and advances the
> front index of the linux RX ring.
>
> > Anyway, the former is also supported by TXDONE_BY_ACK already.
>
> If we want to focus on when TX is done and really be strict about it,
> then for my case TX can be considered done when the remote consumes it.
> I need to poll and check when the linux TX ring rear index is moved by
> the remote. Other option is to poll the IRQ status register of the
> remote to see when the request was consumed. So maybe TXDONE_BY_POLL is
> more accurate?
> TX can also be considered done after what we write to TX ring hits the
> endpoint-SRAM, thus neither of these flags needed.
>
> As a side nodeto IRQs, the acpm protocol allows channels to work either
> in polling or in IRQ mode. I expect in the future we'll need to specify
> the done method per channel and not per controller. IRQs are not used in
> the downstream, thus I didn't touch them in this proposal.
>
> >
> >>>
> >>>> Is there a specific driver that I can look at in order to understand the
> >>>> case where RX is not tied to TX?
> >>>
> >>>  Many...
> >>>  * The remote end owns sensors and can asynchronously send, say
> >>> thermal, notifications to Linux.
> >>>  * On some platform the RX may be asynchronous, that is sent later
> >>> with some identifying tag of the past TX.
> >>>  * Just reverse the roles of local and remote - remote sends us a
> >>> request (RX for us) and we are supposed to send a response (TX).
> >>
> >> I was hoping for a name of a driver, but I guess I can find them out
> >> eventually.
> >>
> > Do these usecases seem impossible to you? Many users are not upstream
>
> They seem fine, I just wanted to see the implementation and decide
> whether the request approach can be applied to them or not. I think it can.
>
> > that we care
> > about as long as we are not making some corrective change.>
> >
> >>>
> >>>> Also, if you think there's a better way to enable controllers to manage
> >>>> their hardware queues, please say.
> >>>>
> >>> Tying RX to TX has nothing to do with hardware queues. There can be a
> >>
> >> Right, I agree.
> >>
> >>> hardware queue and the protocol can still be
> >>> "local-to-remote-broadcast".
> >>>
> >>> While I don't think we need the "Rx is in relation to some past Tx"
> >>> api, I am open to ideas to better utilize h/w queues.
> >>>
> >>> The h/w TX queue/fifo may hold, say, 8 messages while the controller
> >>> transmits them to the remote. Currently it is implemented by
> >>> .last_tx_done() returning true as long as fifo is not full.
> >>> The other option, to have more than one TX in-flight, only complicates
> >>> the mailbox api without fetching any real benefits because the
> >>> protocol would still need to handle cases like Tx was successful but
> >>> the remote rejected the request or the Rx failed on the h/w fifo
> >>> (there could be rx-fifo too, right). Is where I am at right now.
> >>>
> >> No worries, I'm confident we'll reach a conclusion.
> >>
> >> It's true I implemented just my use case, but that doesn't mean that the
> >> "request" approach can't be extended for current users.
> >>
> > Sorry, I am not sure we should make the api dance around your usecase.
>
> No worries, it's fine to disagree.
>
> > If your usecase is not currently handled, please let me know. We can
> > discuss that.
>
> It's not handled. I have a list of requirements I have to fulfill which
> are not covered by the mailbox core:
>
> 1/ allow multiple TX in-flight. We need to let the controller handle its
> hardware queue, otherwise the hardware queue has no sense at all.
>
As I said, this one is currently handled by assuming TX-done upon
depositing into the h/w queue/fifo.
You will have the same perf as with your attempt to have "multiple
in-flight", while neither of these approaches handles in-flight
failures. We can discuss this.

> 2/ allow to tie a TX to a RX. I need to know to what TX the response
> corresponds to.
> 3/ allow multiple clients to the same channel. ACPM allows this. Support
> would have come as an extra patch.
>
These are nothing new. A few other platforms too have shared channels
and that is implemented above the mailbox.

> 4/ allow polling and IRQ channels for the same mailbox controller (not
> urgent).
>
It is very easy to populate them as separate controllers.

> I guess that we agree that in order to allow multiple TX in-flight we
> need a completion struct per message/request and not per channel. But we
> disagree on the ability to tie a TX to a RX.
>
> How can I move forward with this?
>
As I already explained, tying RX to a TX is very protocol-specific
and should be done in the layer above the mailbox api.

Thanks.
Re: [PATCH v2 1/2] mailbox: add async request mechanism to empower controllers w/ hw queues
Posted by Tudor Ambarus 1 week ago
Hi, Jassi,

Sorry for the late reply, I was off for a while.

On 10/29/24 3:59 PM, Jassi Brar wrote:
>>> If your usecase is not currently handled, please let me know. We can
>>> discuss that.
>> It's not handled. I have a list of requirements I have to fulfill which
>> are not covered by the mailbox core:
>>
>> 1/ allow multiple TX in-flight. We need to let the controller handle its
>> hardware queue, otherwise the hardware queue has no sense at all.
>>
> As I said this one is currently handled by assuming TX-done by
> depositing in the h/w queue/fifo.

This may indeed work. I would have a TXDONE_BY_ACK controller. Its
`.send_data` would be a one-liner where I just raise the interrupt to
the firmware. The TX ring would be written by `cl->tx_prepare()`.

Then in the protocol driver I would do:
ret = mbox_send_message(mbox_chan, msg);
if (ret < 0)
	return ret;

/* TX-done when depositing in the h/w queue. */
mbox_client_txdone(mbox_chan, 0);

ret = exynos_acpm_wait_for_message_response(mbox_chan, msg);
if (ret)
	return ret;

> You will have the same perf as with your attempt to have "multiple

I'm still forced to pass all the messages through the mailbox core's
software queue. I also don't need the locking from the core. For my case
the mbox core just needs to do:

if (chan->cl->tx_prepare)
	chan->cl->tx_prepare(chan->cl, data);

return chan->mbox->ops->send_data(chan, data);

Would it make sense to introduce such a method?

BTW, what are the minimum requirements for a controller to be considered
a mailbox controller? As you saw, I'm using the mailbox controller just
to raise the interrupt to the firmware; no messages are written to the
controller's buffers.

Does message size matter? On my device the ACPM protocol uses messages
of 0, 2, 16 or 32 bytes, but it also allows passing a pointer to an
indirection command SRAM buffer, where one is able to write up to 412
bytes.

> in-flight" while neither of these approaches handles in-flight
> failures. We can discuss this.

Do you mean that there's no retry mechanism? The crypto subsystem has
one; we may look at what's done there if we care.

> 
>> 2/ allow to tie a TX to a RX. I need to know to what TX the response
>> corresponds to.
>> 3/ allow multiple clients to the same channel. ACPM allows this. Support
>> would have come as an extra patch.
>>
> These are nothing new. A few other platforms too have shared channels
> and that is implemented above the mailbox.
> 
okay

>> 4/ allow polling and IRQ channels for the same mailbox controller (not
>> urgent).
>>
> It is very easy to populate them as separate controllers.

Do you mean to call devm_mbox_controller_register() twice for the same dev?

Thanks,
ta