[RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Damien Riégel 7 months, 1 week ago
Hi,


This patchset brings initial support for the Silicon Labs CPC
(Co-Processor Communication) protocol. This protocol is used by the
EFR32 Series [1]. These devices offer a variety of radio protocols,
such as Bluetooth, Z-Wave, and Zigbee [2].

Some of these devices support several protocols in one chipset, and the
main raison d'être for CPC is to facilitate the coexistence of these
protocols by providing each radio stack with a dedicated communication
channel over a shared physical link, such as SPI or SDIO.

These separate communication channels are called endpoints, and the
protocol provides:
  - reliability, by retransmitting unacknowledged packets (not part of
    the current patchset)
  - ordered delivery
  - resource management, by not sending packets to the radio
    co-processor if it doesn't have room to receive them (a rough
    usage sketch follows)
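
To make the endpoint concept a bit more concrete, here is how a client
driver could consume one. The cpc_ep_*() names and the callback
signature below are illustrative placeholders only, not the actual API
introduced by this series:

/*
 * Illustrative sketch only: cpc_ep_connect() and cpc_ep_send() are
 * placeholder names, not the API added by this series.
 */
#include <linux/errno.h>
#include <linux/skbuff.h>

struct cpc_endpoint;

int cpc_ep_connect(struct cpc_endpoint *ep,
                   void (*rx)(struct cpc_endpoint *ep, struct sk_buff *skb));
int cpc_ep_send(struct cpc_endpoint *ep, struct sk_buff *skb);

/* Frames reaching this callback already passed the CRC check and
 * arrive in order.
 */
static void my_stack_rx(struct cpc_endpoint *ep, struct sk_buff *skb)
{
        /* a real driver would hand the frame to the stack on top */
        kfree_skb(skb);
}

static int my_stack_start(struct cpc_endpoint *ep)
{
        struct sk_buff *skb;
        int ret;

        /* open the dedicated channel to the radio co-processor */
        ret = cpc_ep_connect(ep, my_stack_rx);
        if (ret)
                return ret;

        skb = alloc_skb(64, GFP_KERNEL);
        if (!skb)
                return -ENOMEM;

        /* transmission is held back if the co-processor advertised
         * that this endpoint has no free RX buffers
         */
        return cpc_ep_send(ep, skb);
}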

The current patchset showcases a full example with Bluetooth over SPI.
In the future, other buses should be supported, as well as other radio
protocols.

For the RFC, I've bundled everything together in one big module to avoid
submitting patches to different subsystems, but I expect to get comments
about that, and the final version of this series will probably be split
into two or three modules. Please let me know if it makes sense or not:
  - net/cpc for the core implementation of the protocol
  - drivers/bluetooth/ for the bluetooth endpoint
  - optionally, the SPI driver could be separated from the main module
    and moved to drivers/spi

I've tried to split the patchset into digestible, atomic commits, but as
we're introducing a new protocol, the first 12 commits are all needed to
get it working. The SPI and Bluetooth drivers are more standalone and
illustrate how new buses or radio protocols would be added in the future.

[1] https://www.silabs.com/wireless/gecko-series-2
[2] https://www.silabs.com/wireless

Damien Riégel (15):
  net: cpc: add base skeleton driver
  net: cpc: add endpoint infrastructure
  net: cpc: introduce CPC driver and bus
  net: cpc: add protocol header structure and API
  net: cpc: implement basic transmit path
  net: cpc: implement basic receive path
  net: cpc: implement sequencing and ack
  net: cpc: add support for connecting endpoints
  net: cpc: add support for RST frames
  net: cpc: make disconnect blocking
  net: cpc: add system endpoint
  net: cpc: create system endpoint with a new interface
  dt-bindings: net: cpc: add silabs,cpc-spi.yaml
  net: cpc: add SPI interface driver
  net: cpc: add Bluetooth HCI driver

 .../bindings/net/silabs,cpc-spi.yaml          |  54 ++
 MAINTAINERS                                   |   6 +
 drivers/net/Kconfig                           |   2 +
 drivers/net/Makefile                          |   1 +
 drivers/net/cpc/Kconfig                       |  16 +
 drivers/net/cpc/Makefile                      |   5 +
 drivers/net/cpc/ble.c                         | 147 +++++
 drivers/net/cpc/ble.h                         |  14 +
 drivers/net/cpc/cpc.h                         | 204 +++++++
 drivers/net/cpc/endpoint.c                    | 333 +++++++++++
 drivers/net/cpc/header.c                      | 237 ++++++++
 drivers/net/cpc/header.h                      |  83 +++
 drivers/net/cpc/interface.c                   | 308 ++++++++++
 drivers/net/cpc/interface.h                   | 117 ++++
 drivers/net/cpc/main.c                        | 163 ++++++
 drivers/net/cpc/protocol.c                    | 309 ++++++++++
 drivers/net/cpc/protocol.h                    |  27 +
 drivers/net/cpc/spi.c                         | 550 ++++++++++++++++++
 drivers/net/cpc/spi.h                         |  12 +
 drivers/net/cpc/system.c                      | 432 ++++++++++++++
 drivers/net/cpc/system.h                      |  14 +
 21 files changed, 3034 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/net/silabs,cpc-spi.yaml
 create mode 100644 drivers/net/cpc/Kconfig
 create mode 100644 drivers/net/cpc/Makefile
 create mode 100644 drivers/net/cpc/ble.c
 create mode 100644 drivers/net/cpc/ble.h
 create mode 100644 drivers/net/cpc/cpc.h
 create mode 100644 drivers/net/cpc/endpoint.c
 create mode 100644 drivers/net/cpc/header.c
 create mode 100644 drivers/net/cpc/header.h
 create mode 100644 drivers/net/cpc/interface.c
 create mode 100644 drivers/net/cpc/interface.h
 create mode 100644 drivers/net/cpc/main.c
 create mode 100644 drivers/net/cpc/protocol.c
 create mode 100644 drivers/net/cpc/protocol.h
 create mode 100644 drivers/net/cpc/spi.c
 create mode 100644 drivers/net/cpc/spi.h
 create mode 100644 drivers/net/cpc/system.c
 create mode 100644 drivers/net/cpc/system.h

-- 
2.49.0

Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Andrew Lunn 7 months, 1 week ago
On Sun, May 11, 2025 at 09:27:33PM -0400, Damien Riégel wrote:
> Hi,
> 
> 
> This patchset brings initial support for Silicon Labs CPC protocol,
> standing for Co-Processor Communication. This protocol is used by the
> EFR32 Series [1]. These devices offer a variety for radio protocols,
> such as Bluetooth, Z-Wave, Zigbee [2].

Before we get too deep into the details of the patches, please could
you do a compare/contrast to Greybus.

The core of Greybus is already in the kernel, with some more bits
being in staging. I don't know it too well, but at first glance it
looks very similar. We should not duplicate that.

Also, this patchset adds Bluetooth, yet you talk about Z-Wave and
Zigbee. But the EFR32 is a general purpose SoC, with I2C, SPI, PWM and
UART. Greybus has support for these, although the code is currently in
staging. But for staging code, it is actually pretty good.

Why should we add a vendor implementation when we already appear to
have something which does most of what is needed?

	Andrew
Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Damien Riégel 7 months, 1 week ago
On Mon May 12, 2025 at 1:07 PM EDT, Andrew Lunn wrote:
> On Sun, May 11, 2025 at 09:27:33PM -0400, Damien Riégel wrote:
>> Hi,
>>
>>
>> This patchset brings initial support for Silicon Labs CPC protocol,
>> standing for Co-Processor Communication. This protocol is used by the
>> EFR32 Series [1]. These devices offer a variety for radio protocols,
>> such as Bluetooth, Z-Wave, Zigbee [2].
>
> Before we get too deep into the details of the patches, please could
> you do a compare/contrast to Greybus.

Thank you for the prompt feedback on the RFC. We took a look at Greybus
in the past and it didn't seem to fit our needs. One of the main use
cases that drove the development of CPC was to support WiFi (in
coexistence with other radio stacks) over SDIO, and to get the maximum
throughput possible. We concluded that to achieve this we would need
packet aggregation, as sending one frame at a time over SDIO is
wasteful, and management of the Radio Co-Processor's available buffers,
as sending frames that the RCP is not able to process would degrade
performance.

Greybus doesn't seem to offer these capabilities. It seems to be more
geared towards implementing RPC, where the host sends a command and
then waits for the device to execute it and respond. For the Greybus
protocols that implement some "streaming" features, like audio or video
capture, the data streams go to an I2S or CSI interface, but they don't
seem to go through a CPort. So Greybus seems to act as a backbone to
connect CPorts together, while high-throughput transfers happen on
other types of links. CPC is more about moving data over a physical
link, guaranteeing ordered delivery and avoiding unnecessary
transmissions if the remote doesn't have the resources; it's much lower
level than Greybus.

> Also, this patch adds Bluetooth, you talk about Z-Wave and Zigbee. But
> the EFR32 is a general purpose SoC, with I2C, SPI, PWM, UART. Greybus
> has support for these, although the code is current in staging. But
> for staging code, it is actually pretty good.

I agree with you that the EFR32 is a general purpose SoC and exposing
all available peripherals would be great, but most customers buy it as
an RCP module with one or more radio stacks enabled, and that's the
situation we're trying to address. Maybe I introduced a framework with
a custom bus, drivers and endpoints where it was unnecessary; the goal
is not to be super generic, only to support coexistence of our radio
stacks.
Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Andrew Lunn 7 months, 1 week ago
On Tue, May 13, 2025 at 05:15:20PM -0400, Damien Riégel wrote:
> On Mon May 12, 2025 at 1:07 PM EDT, Andrew Lunn wrote:
> > On Sun, May 11, 2025 at 09:27:33PM -0400, Damien Riégel wrote:
> >> Hi,
> >>
> >>
> >> This patchset brings initial support for Silicon Labs CPC protocol,
> >> standing for Co-Processor Communication. This protocol is used by the
> >> EFR32 Series [1]. These devices offer a variety for radio protocols,
> >> such as Bluetooth, Z-Wave, Zigbee [2].
> >
> > Before we get too deep into the details of the patches, please could
> > you do a compare/contrast to Greybus.
> 
> Thank you for the prompt feedback on the RFC. We took a look at Greybus
> in the past and it didn't seem to fit our needs. One of the main use
> case that drove the development of CPC was to support WiFi (in
> coexistence with other radio stacks) over SDIO, and get the maximum
> throughput possible. We concluded that to achieve this we would need
> packet aggregation, as sending one frame at a time over SDIO is
> wasteful, and managing Radio Co-Processor available buffers, as sending
> frames that the RCP is not able to process would degrade performance.
> 
> Greybus don't seem to offer these capabilities. It seems to be more
> geared towards implementing RPC, where the host would send a command,
> and then wait for the device to execute it and to respond. For Greybus'
> protocols that implement some "streaming" features like audio or video
> capture, the data streams go to an I2S or CSI interface, but it doesn't
> seem to go through a CPort. So it seems to act as a backbone to connect
> CPorts together, but high-throughput transfers happen on other types of
> links. CPC is more about moving data over a physical link, guaranteeing
> ordered delivery and avoiding unnecessary transmissions if remote
> doesn't have the resources, it's much lower level than Greybus.

As I said, I don't know Greybus too well. I hope its maintainers can
comment on this.

> > Also, this patch adds Bluetooth, you talk about Z-Wave and Zigbee. But
> > the EFR32 is a general purpose SoC, with I2C, SPI, PWM, UART. Greybus
> > has support for these, although the code is current in staging. But
> > for staging code, it is actually pretty good.
> 
> I agree with you that the EFR32 is a general purpose SoC and exposing
> all available peripherals would be great, but most customers buy it as
> an RCP module with one or more radio stacks enabled, and that's the
> situation we're trying to address. Maybe I introduced a framework with
> custom bus, drivers and endpoints where it was unnecessary, the goal is
> not to be super generic but only to support coexistence of our radio
> stacks.

This leads to my next problem.

https://www.nordicsemi.com/-/media/Software-and-other-downloads/Product-Briefs/nRF5340-SoC-PB.pdf
Nordic Semiconductor has what appears to be a similar device.

https://www.microchip.com/en-us/products/wireless-connectivity/bluetooth-low-energy/microcontrollers
Microchip has a similar device as well.

https://www.ti.com/product/CC2674R10
TI has a similar device.

And maybe there are others?

Are we going to get a Silabs CPC, a Nordic CPC, a Microchip CPC, a TI
CPC, and an ACME CPC?

How do we end up with one implementation?

Maybe Greybus does not currently support your streaming use case too
well, but it is at least vendor neutral. Can it be extended for
streaming?

And maybe a dumb question... How do transfers get out of order over
SPI and SDIO? If you look at the Open Alliance TC6 specification for
Ethernet over SPI, it does not have any issues with ordering.

	 Andrew
Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Damien Riégel 7 months ago
On Tue May 13, 2025 at 5:53 PM EDT, Andrew Lunn wrote:
> On Tue, May 13, 2025 at 05:15:20PM -0400, Damien Riégel wrote:
>> On Mon May 12, 2025 at 1:07 PM EDT, Andrew Lunn wrote:
>> > On Sun, May 11, 2025 at 09:27:33PM -0400, Damien Riégel wrote:
>> >> Hi,
>> >>
>> >>
>> >> This patchset brings initial support for Silicon Labs CPC protocol,
>> >> standing for Co-Processor Communication. This protocol is used by the
>> >> EFR32 Series [1]. These devices offer a variety for radio protocols,
>> >> such as Bluetooth, Z-Wave, Zigbee [2].
>> >
>> > Before we get too deep into the details of the patches, please could
>> > you do a compare/contrast to Greybus.
>>
>> Thank you for the prompt feedback on the RFC. We took a look at Greybus
>> in the past and it didn't seem to fit our needs. One of the main use
>> case that drove the development of CPC was to support WiFi (in
>> coexistence with other radio stacks) over SDIO, and get the maximum
>> throughput possible. We concluded that to achieve this we would need
>> packet aggregation, as sending one frame at a time over SDIO is
>> wasteful, and managing Radio Co-Processor available buffers, as sending
>> frames that the RCP is not able to process would degrade performance.
>>
>> Greybus don't seem to offer these capabilities. It seems to be more
>> geared towards implementing RPC, where the host would send a command,
>> and then wait for the device to execute it and to respond. For Greybus'
>> protocols that implement some "streaming" features like audio or video
>> capture, the data streams go to an I2S or CSI interface, but it doesn't
>> seem to go through a CPort. So it seems to act as a backbone to connect
>> CPorts together, but high-throughput transfers happen on other types of
>> links. CPC is more about moving data over a physical link, guaranteeing
>> ordered delivery and avoiding unnecessary transmissions if remote
>> doesn't have the resources, it's much lower level than Greybus.
>
> As is said, i don't know Greybus too well. I hope its Maintainers can
> comment on this.
>
>> > Also, this patch adds Bluetooth, you talk about Z-Wave and Zigbee. But
>> > the EFR32 is a general purpose SoC, with I2C, SPI, PWM, UART. Greybus
>> > has support for these, although the code is current in staging. But
>> > for staging code, it is actually pretty good.
>>
>> I agree with you that the EFR32 is a general purpose SoC and exposing
>> all available peripherals would be great, but most customers buy it as
>> an RCP module with one or more radio stacks enabled, and that's the
>> situation we're trying to address. Maybe I introduced a framework with
>> custom bus, drivers and endpoints where it was unnecessary, the goal is
>> not to be super generic but only to support coexistence of our radio
>> stacks.
>
> This leads to my next problem.
>
> https://www.nordicsemi.com/-/media/Software-and-other-downloads/Product-Briefs/nRF5340-SoC-PB.pdf
> Nordic Semiconductor has what appears to be a similar device.
>
> https://www.microchip.com/en-us/products/wireless-connectivity/bluetooth-low-energy/microcontrollers
> Microchip has a similar device as well.
>
> https://www.ti.com/product/CC2674R10
> TI has a similar device.
>
> And maybe there are others?
>
> Are we going to get a Silabs CPC, a Nordic CPC, a Microchip CPC, a TI
> CPC, and an ACME CPC?
>
> How do we end up with one implementation?
>
> Maybe Greybus does not currently support your streaming use case too
> well, but it is at least vendor neutral. Can it be extended for
> streaming?

I get the sentiment that we don't want every single vendor to push their
own protocols that are ever so slightly different. To be honest, I don't
know if Greybus can be extended for that use case, or if it's something
they are interested in supporting. I've subscribed to greybus-dev, so
hopefully my email will get through this time (the previous one is
pending approval).

Unfortunately, we're deep down the CPC road, especially on the firmware
side. The blame is on me for not sending the RFC sooner and getting
feedback earlier, but if we have to massively change our course of
action, we need some degree of confidence that the alternative is
viable for achieving high throughput for WiFi over SDIO. I would really
value any input from the Greybus folks on this.

> And maybe a dumb question... How do transfers get out of order over
> SPI and SDIO? If you look at the Open Alliance TC6 specification for
> Ethernet over SPI, it does not have any issues with ordering.

Sorry, I wasn't very clear about that. Of course packets are sent in
order, but several packets can be sent at once before being
acknowledged, and we might detect CRC errors on one of these packets.
CPC takes care of delivering only valid packets, and packets that come
after the one with the CRC error won't be delivered to the upper layer
until the faulty one has been retransmitted.
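
To put it another way, the receive side behaves like a classic
go-back-N window. Here is a minimal sketch of that logic in plain C;
it is only an illustration, not the code from protocol.c in this
series:

/* Go-back-N style receive check, illustrative only. */
#include <stdbool.h>
#include <stdint.h>

struct rx_state {
        uint8_t expected_seq;   /* next in-order sequence number */
};

/*
 * Returns true when the frame can be delivered to the upper layer.
 * A frame with a bad CRC, and every frame received after that gap, is
 * dropped; the acknowledgment keeps pointing at expected_seq, so the
 * remote eventually retransmits starting from the missing frame.
 */
static bool rx_frame(struct rx_state *rx, uint8_t seq, bool crc_ok,
                     uint8_t *ack_out)
{
        bool deliver = crc_ok && seq == rx->expected_seq;

        if (deliver)
                rx->expected_seq++;     /* wraps naturally */

        *ack_out = rx->expected_seq;    /* cumulative ack */
        return deliver;
}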

I took a look at the specification you mentioned, and it completely
delegates that to upper layers:

    When transmit or receive frame bit errors are detected on the SPI,
    the retry of frames is performed by higher protocol layers that are
    beyond the scope of this specification. [1]

Our goal was to be agnostic of the stacks on top of CPC and to reliably
transmit frames. To give a bit of context, CPC was originally derived
from HDLC, which features sequence-gap detection and retransmission. On
top of that, we've now added the mechanism I mentioned in previous
emails that throttles the host when the RCP is not ready to receive and
process frames on an endpoint.

[1] https://opensig.org/wp-content/uploads/2023/12/OPEN_Alliance_10BASET1x_MAC-PHY_Serial_Interface_V1.1.pdf (Section 7.3.1)
Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Alex Elder 6 months, 4 weeks ago
On 5/14/25 5:52 PM, Damien Riégel wrote:
> On Tue May 13, 2025 at 5:53 PM EDT, Andrew Lunn wrote:
>> On Tue, May 13, 2025 at 05:15:20PM -0400, Damien Riégel wrote:
>>> On Mon May 12, 2025 at 1:07 PM EDT, Andrew Lunn wrote:
>>>> On Sun, May 11, 2025 at 09:27:33PM -0400, Damien Riégel wrote:
>>>>> Hi,
>>>>>
>>>>>
>>>>> This patchset brings initial support for Silicon Labs CPC protocol,
>>>>> standing for Co-Processor Communication. This protocol is used by the
>>>>> EFR32 Series [1]. These devices offer a variety for radio protocols,
>>>>> such as Bluetooth, Z-Wave, Zigbee [2].
>>>>
>>>> Before we get too deep into the details of the patches, please could
>>>> you do a compare/contrast to Greybus.
>>>
>>> Thank you for the prompt feedback on the RFC. We took a look at Greybus
>>> in the past and it didn't seem to fit our needs. One of the main use
>>> case that drove the development of CPC was to support WiFi (in
>>> coexistence with other radio stacks) over SDIO, and get the maximum
>>> throughput possible. We concluded that to achieve this we would need
>>> packet aggregation, as sending one frame at a time over SDIO is
>>> wasteful, and managing Radio Co-Processor available buffers, as sending
>>> frames that the RCP is not able to process would degrade performance.
>>>
>>> Greybus don't seem to offer these capabilities. It seems to be more
>>> geared towards implementing RPC, where the host would send a command,
>>> and then wait for the device to execute it and to respond. For Greybus'
>>> protocols that implement some "streaming" features like audio or video
>>> capture, the data streams go to an I2S or CSI interface, but it doesn't
>>> seem to go through a CPort. So it seems to act as a backbone to connect
>>> CPorts together, but high-throughput transfers happen on other types of
>>> links. CPC is more about moving data over a physical link, guaranteeing
>>> ordered delivery and avoiding unnecessary transmissions if remote
>>> doesn't have the resources, it's much lower level than Greybus.
>>
>> As is said, i don't know Greybus too well. I hope its Maintainers can
>> comment on this.
>>
>>>> Also, this patch adds Bluetooth, you talk about Z-Wave and Zigbee. But
>>>> the EFR32 is a general purpose SoC, with I2C, SPI, PWM, UART. Greybus
>>>> has support for these, although the code is current in staging. But
>>>> for staging code, it is actually pretty good.
>>>
>>> I agree with you that the EFR32 is a general purpose SoC and exposing
>>> all available peripherals would be great, but most customers buy it as
>>> an RCP module with one or more radio stacks enabled, and that's the
>>> situation we're trying to address. Maybe I introduced a framework with
>>> custom bus, drivers and endpoints where it was unnecessary, the goal is
>>> not to be super generic but only to support coexistence of our radio
>>> stacks.
>>
>> This leads to my next problem.
>>
>> https://www.nordicsemi.com/-/media/Software-and-other-downloads/Product-Briefs/nRF5340-SoC-PB.pdf
>> Nordic Semiconductor has what appears to be a similar device.
>>
>> https://www.microchip.com/en-us/products/wireless-connectivity/bluetooth-low-energy/microcontrollers
>> Microchip has a similar device as well.
>>
>> https://www.ti.com/product/CC2674R10
>> TI has a similar device.
>>
>> And maybe there are others?
>>
>> Are we going to get a Silabs CPC, a Nordic CPC, a Microchip CPC, a TI
>> CPC, and an ACME CPC?
>>
>> How do we end up with one implementation?
>>
>> Maybe Greybus does not currently support your streaming use case too
>> well, but it is at least vendor neutral. Can it be extended for
>> streaming?
> 
> I get the sentiment that we don't want every single vendor to push their
> own protocols that are ever so slightly different. To be honest, I don't
> know if Greybus can be extended for that use case, or if it's something
> they are interested in supporting. I've subscribed to greybus-dev so
> hopefully my email will get through this time (previous one is pending
> approval).

Greybus was designed for a particular platform, but the intention
was to make it extensible.  It can be extended with new protocols,
and I don't think anyone is opposed to that.

> Unfortunately, we're deep down the CPC road, especially on the firmware
> side. Blame on me for not sending the RFC sooner and getting feedback
> earlier, but if we have to massively change our course of action we need
> some degree of confidence that this is a viable alternative for
> achieving high-throughput for WiFi over SDIO. I would really value any
> input from the Greybus folks on this.

I kind of assumed this.  I'm sure Andrew's message was not that
welcome for that reason, but he's right about trying to agree on
something in common if possible.  If Greybus can solve all your
problems, the maintainers will support the code being modified
to support what's needed.

(To be clear, I don't assume Greybus will solve all your problems.
For example, UniPro provides a reliable transport, so that's what
Greybus currently expects.)

I have no input on your throughput question at the moment.

					-Alex

>> And maybe a dumb question... How do transfers get out of order over
>> SPI and SDIO? If you look at the Open Alliance TC6 specification for
>> Ethernet over SPI, it does not have any issues with ordering.
> 
> Sorry I wasn't very clear about that. Of course packets are sent in
> order but several packets can be sent at once before being acknowledged
> and we might detect CRC errors on one of these packets. CPC takes care
> of only delivering valid packets, and packets that come after the one
> with CRC error won't be delivered to upper layer until the faulty one is
> retransmitted.
> 
> I took a look at the specification you mentioned and they completely
> delegate that to upper layers:
> 
>      When transmit or receive frame bit errors are detected on the SPI,
>      the retry of frames is performed by higher protocol layers that are
>      beyond the scope of this specification. [1]
> 
> Our goal was to be agnostic of stacks on top of CPC and reliably
> transmit frames. To give a bit of context, CPC was originally derived
> from HDLC, which features detecting sequence gaps and retransmission. On
> top of that, we've now added the mechanism I mentioned in previous
> emails that throttle the host when the RCP is not ready to receive and
> process frames on an endpoint.
> 
> [1] https://opensig.org/wp-content/uploads/2023/12/OPEN_Alliance_10BASET1x_MAC-PHY_Serial_Interface_V1.1.pdf (Section 7.3.1)

Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Damien Riégel 6 months, 3 weeks ago
On Wed May 21, 2025 at 10:46 PM EDT, Alex Elder wrote:
> On 5/14/25 5:52 PM, Damien Riégel wrote:
>> On Tue May 13, 2025 at 5:53 PM EDT, Andrew Lunn wrote:
>>> On Tue, May 13, 2025 at 05:15:20PM -0400, Damien Riégel wrote:
>>>> On Mon May 12, 2025 at 1:07 PM EDT, Andrew Lunn wrote:
>>>>> On Sun, May 11, 2025 at 09:27:33PM -0400, Damien Riégel wrote:
>>>>>> Hi,
>>>>>>
>>>>>>
>>>>>> This patchset brings initial support for Silicon Labs CPC protocol,
>>>>>> standing for Co-Processor Communication. This protocol is used by the
>>>>>> EFR32 Series [1]. These devices offer a variety for radio protocols,
>>>>>> such as Bluetooth, Z-Wave, Zigbee [2].
>>>>>
>>>>> Before we get too deep into the details of the patches, please could
>>>>> you do a compare/contrast to Greybus.
>>>>
>>>> Thank you for the prompt feedback on the RFC. We took a look at Greybus
>>>> in the past and it didn't seem to fit our needs. One of the main use
>>>> case that drove the development of CPC was to support WiFi (in
>>>> coexistence with other radio stacks) over SDIO, and get the maximum
>>>> throughput possible. We concluded that to achieve this we would need
>>>> packet aggregation, as sending one frame at a time over SDIO is
>>>> wasteful, and managing Radio Co-Processor available buffers, as sending
>>>> frames that the RCP is not able to process would degrade performance.
>>>>
>>>> Greybus don't seem to offer these capabilities. It seems to be more
>>>> geared towards implementing RPC, where the host would send a command,
>>>> and then wait for the device to execute it and to respond. For Greybus'
>>>> protocols that implement some "streaming" features like audio or video
>>>> capture, the data streams go to an I2S or CSI interface, but it doesn't
>>>> seem to go through a CPort. So it seems to act as a backbone to connect
>>>> CPorts together, but high-throughput transfers happen on other types of
>>>> links. CPC is more about moving data over a physical link, guaranteeing
>>>> ordered delivery and avoiding unnecessary transmissions if remote
>>>> doesn't have the resources, it's much lower level than Greybus.
>>>
>>> As is said, i don't know Greybus too well. I hope its Maintainers can
>>> comment on this.
>>>
>>>>> Also, this patch adds Bluetooth, you talk about Z-Wave and Zigbee. But
>>>>> the EFR32 is a general purpose SoC, with I2C, SPI, PWM, UART. Greybus
>>>>> has support for these, although the code is current in staging. But
>>>>> for staging code, it is actually pretty good.
>>>>
>>>> I agree with you that the EFR32 is a general purpose SoC and exposing
>>>> all available peripherals would be great, but most customers buy it as
>>>> an RCP module with one or more radio stacks enabled, and that's the
>>>> situation we're trying to address. Maybe I introduced a framework with
>>>> custom bus, drivers and endpoints where it was unnecessary, the goal is
>>>> not to be super generic but only to support coexistence of our radio
>>>> stacks.
>>>
>>> This leads to my next problem.
>>>
>>> https://www.nordicsemi.com/-/media/Software-and-other-downloads/Product-Briefs/nRF5340-SoC-PB.pdf
>>> Nordic Semiconductor has what appears to be a similar device.
>>>
>>> https://www.microchip.com/en-us/products/wireless-connectivity/bluetooth-low-energy/microcontrollers
>>> Microchip has a similar device as well.
>>>
>>> https://www.ti.com/product/CC2674R10
>>> TI has a similar device.
>>>
>>> And maybe there are others?
>>>
>>> Are we going to get a Silabs CPC, a Nordic CPC, a Microchip CPC, a TI
>>> CPC, and an ACME CPC?
>>>
>>> How do we end up with one implementation?
>>>
>>> Maybe Greybus does not currently support your streaming use case too
>>> well, but it is at least vendor neutral. Can it be extended for
>>> streaming?
>>
>> I get the sentiment that we don't want every single vendor to push their
>> own protocols that are ever so slightly different. To be honest, I don't
>> know if Greybus can be extended for that use case, or if it's something
>> they are interested in supporting. I've subscribed to greybus-dev so
>> hopefully my email will get through this time (previous one is pending
>> approval).
>
> Greybus was designed for a particular platform, but the intention
> was to make it extensible.  It can be extended with new protocols,
> and I don't think anyone is opposed to that.
>
>> Unfortunately, we're deep down the CPC road, especially on the firmware
>> side. Blame on me for not sending the RFC sooner and getting feedback
>> earlier, but if we have to massively change our course of action we need
>> some degree of confidence that this is a viable alternative for
>> achieving high-throughput for WiFi over SDIO. I would really value any
>> input from the Greybus folks on this.
>
> I kind of assumed this.  I'm sure Andrew's message was not that
> welcome for that reason, but he's right about trying to agree on
> something in common if possible.  If Greybus can solve all your
> problems, the maintainers will support the code being modified
> to support what's needed.
>
> (To be clear, I don't assume Greybus will solve all your problems.
> For example, UniPro provides a reliable transport, so that's what
> Greybus currently expects.)

I don't really know about UniPro and I'm learning about it as the
discussion goes, but one of the points listed on Wikipedia is
"reliability - data errors detected and correctable via retransmission".

This is where CPC could come in, probably with a different name and a
reduced scope: a way to implement reliable transmission over UART, SPI,
or SDIO by ensuring data errors are detected and packets retransmitted
if necessary, and limited to just that.

What's missing for us in Greybus, as discussed in a subthread, is
asynchronous operations to fit better with the network stack, but I
think that could easily be added.
Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Andrew Lunn 6 months, 3 weeks ago
> I don't really know about UniPro and I'm learning about it as the
> discussion goes, but one of the point listed on Wikipedia is
> "reliability - data errors detected and correctable via retransmission"
> 
> This is where CPC could come in, probably with a different name and a
> reduced scope, as a way to implement reliable transmission over UART,
> SPI, SDIO, by ensuring data errors are detected and packets
> retransmitted if necessary, and be limited to that.

You mentioned HDLC in the past. What is interesting is that HDLC is
actually used in Greybus:

https://elixir.bootlin.com/linux/v6.15-rc7/source/drivers/greybus/gb-beagleplay.c#L581

I've no idea if it's just for framing, or if there are also retries on
errors, S-frames with flow and error control, etc. There might be code
you can reuse here.

	Andrew
Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Damien Riégel 6 months, 3 weeks ago
On Fri May 23, 2025 at 4:06 PM EDT, Andrew Lunn wrote:
>> I don't really know about UniPro and I'm learning about it as the
>> discussion goes, but one of the point listed on Wikipedia is
>> "reliability - data errors detected and correctable via retransmission"
>>
>> This is where CPC could come in, probably with a different name and a
>> reduced scope, as a way to implement reliable transmission over UART,
>> SPI, SDIO, by ensuring data errors are detected and packets
>> retransmitted if necessary, and be limited to that.
>
> You mentioned HDLC in the past. What is interesting is that HDLC is
> actually used in Greybus:
>
> https://elixir.bootlin.com/linux/v6.15-rc7/source/drivers/greybus/gb-beagleplay.c#L581
>
> I've no idea if its just for framing, or if there is also retries on
> errors, S-frames with flow and error control etc. There might be code
> you can reuse here.

Yeah, I've seen it when looking at Greybus; from what I could see, it's
only framing. There is a CRC check though: received frames that don't
pass that check are not passed to the Greybus layer.

Another aspect we would like to support is buffer management. In our
implementation, each endpoint has its own dedicated pool of RX buffers,
and the number of available buffers is advertised to the remote, so the
Linux driver can delay transmission of packets if an endpoint is out of
RX buffers.

We decided to implement that mostly because it would get us the best
throughput possible. Sending a packet to an endpoint that doesn't have
room for it means the packet will be dropped, and we have to wait for a
retransmission to occur, which degrades performance.
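
As a rough illustration of that credit scheme (plain C, illustrative
only, not the code from this series): the TX side keeps a per-endpoint
credit count that is refilled by advertisements from the remote and
consumed on every transmission, so we never send a frame the
co-processor would have to drop.

/* Credit-based TX throttling, illustrative only. */
#include <stdbool.h>
#include <stdint.h>

struct ep_tx_state {
        uint16_t credits;       /* free RX buffers advertised by the remote */
};

/* Called when the remote advertises RX buffers it has freed. */
static void ep_credits_granted(struct ep_tx_state *ep, uint16_t freed)
{
        ep->credits += freed;
}

/*
 * Returns true if a frame may be sent now; otherwise the caller keeps
 * it queued until more credits arrive, instead of sending a frame the
 * co-processor would have to drop.
 */
static bool ep_tx_allowed(struct ep_tx_state *ep)
{
        if (!ep->credits)
                return false;

        ep->credits--;
        return true;
}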
Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Greg Kroah-Hartman 7 months ago
On Wed, May 14, 2025 at 06:52:27PM -0400, Damien Riégel wrote:
> On Tue May 13, 2025 at 5:53 PM EDT, Andrew Lunn wrote:
> > On Tue, May 13, 2025 at 05:15:20PM -0400, Damien Riégel wrote:
> >> On Mon May 12, 2025 at 1:07 PM EDT, Andrew Lunn wrote:
> >> > On Sun, May 11, 2025 at 09:27:33PM -0400, Damien Riégel wrote:
> >> >> Hi,
> >> >>
> >> >>
> >> >> This patchset brings initial support for Silicon Labs CPC protocol,
> >> >> standing for Co-Processor Communication. This protocol is used by the
> >> >> EFR32 Series [1]. These devices offer a variety for radio protocols,
> >> >> such as Bluetooth, Z-Wave, Zigbee [2].
> >> >
> >> > Before we get too deep into the details of the patches, please could
> >> > you do a compare/contrast to Greybus.
> >>
> >> Thank you for the prompt feedback on the RFC. We took a look at Greybus
> >> in the past and it didn't seem to fit our needs. One of the main use
> >> case that drove the development of CPC was to support WiFi (in
> >> coexistence with other radio stacks) over SDIO, and get the maximum
> >> throughput possible. We concluded that to achieve this we would need
> >> packet aggregation, as sending one frame at a time over SDIO is
> >> wasteful, and managing Radio Co-Processor available buffers, as sending
> >> frames that the RCP is not able to process would degrade performance.
> >>
> >> Greybus don't seem to offer these capabilities. It seems to be more
> >> geared towards implementing RPC, where the host would send a command,
> >> and then wait for the device to execute it and to respond. For Greybus'
> >> protocols that implement some "streaming" features like audio or video
> >> capture, the data streams go to an I2S or CSI interface, but it doesn't
> >> seem to go through a CPort. So it seems to act as a backbone to connect
> >> CPorts together, but high-throughput transfers happen on other types of
> >> links. CPC is more about moving data over a physical link, guaranteeing
> >> ordered delivery and avoiding unnecessary transmissions if remote
> >> doesn't have the resources, it's much lower level than Greybus.
> >
> > As is said, i don't know Greybus too well. I hope its Maintainers can
> > comment on this.
> >
> >> > Also, this patch adds Bluetooth, you talk about Z-Wave and Zigbee. But
> >> > the EFR32 is a general purpose SoC, with I2C, SPI, PWM, UART. Greybus
> >> > has support for these, although the code is current in staging. But
> >> > for staging code, it is actually pretty good.
> >>
> >> I agree with you that the EFR32 is a general purpose SoC and exposing
> >> all available peripherals would be great, but most customers buy it as
> >> an RCP module with one or more radio stacks enabled, and that's the
> >> situation we're trying to address. Maybe I introduced a framework with
> >> custom bus, drivers and endpoints where it was unnecessary, the goal is
> >> not to be super generic but only to support coexistence of our radio
> >> stacks.
> >
> > This leads to my next problem.
> >
> > https://www.nordicsemi.com/-/media/Software-and-other-downloads/Product-Briefs/nRF5340-SoC-PB.pdf
> > Nordic Semiconductor has what appears to be a similar device.
> >
> > https://www.microchip.com/en-us/products/wireless-connectivity/bluetooth-low-energy/microcontrollers
> > Microchip has a similar device as well.
> >
> > https://www.ti.com/product/CC2674R10
> > TI has a similar device.
> >
> > And maybe there are others?
> >
> > Are we going to get a Silabs CPC, a Nordic CPC, a Microchip CPC, a TI
> > CPC, and an ACME CPC?
> >
> > How do we end up with one implementation?
> >
> > Maybe Greybus does not currently support your streaming use case too
> > well, but it is at least vendor neutral. Can it be extended for
> > streaming?
> 
> I get the sentiment that we don't want every single vendor to push their
> own protocols that are ever so slightly different. To be honest, I don't
> know if Greybus can be extended for that use case, or if it's something
> they are interested in supporting. I've subscribed to greybus-dev so
> hopefully my email will get through this time (previous one is pending
> approval).
> 
> Unfortunately, we're deep down the CPC road, especially on the firmware
> side. Blame on me for not sending the RFC sooner and getting feedback
> earlier, but if we have to massively change our course of action we need
> some degree of confidence that this is a viable alternative for
> achieving high-throughput for WiFi over SDIO. I would really value any
> input from the Greybus folks on this.

So what you are looking for is a standard way to "tunnel" SDIO over some
other physical transport, right?  If so, then yes, please use Greybus as
that is exactly what it was designed for.

If there is a throughput issue with the SDIO implementation in Greybus,
we can address it by fixing up the code to go faster. I don't recall
there ever being any real benchmarking for that protocol in the past,
as the physical layer we were using for Greybus at the time (MIPI) was
very fast; the bottleneck was usually either the host controller we
were using for Greybus, or the firmware side in the device itself
(i.e. turning Greybus packets into SDIO commands, as SDIO was pretty
slow).

thanks,

greg k-h
Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Damien Riégel 7 months ago
On Thu May 15, 2025 at 3:49 AM EDT, Greg Kroah-Hartman wrote:
> On Wed, May 14, 2025 at 06:52:27PM -0400, Damien Riégel wrote:
>> On Tue May 13, 2025 at 5:53 PM EDT, Andrew Lunn wrote:
>> > On Tue, May 13, 2025 at 05:15:20PM -0400, Damien Riégel wrote:
>> >> On Mon May 12, 2025 at 1:07 PM EDT, Andrew Lunn wrote:
>> >> > On Sun, May 11, 2025 at 09:27:33PM -0400, Damien Riégel wrote:
>> >> >> Hi,
>> >> >>
>> >> >>
>> >> >> This patchset brings initial support for Silicon Labs CPC protocol,
>> >> >> standing for Co-Processor Communication. This protocol is used by the
>> >> >> EFR32 Series [1]. These devices offer a variety for radio protocols,
>> >> >> such as Bluetooth, Z-Wave, Zigbee [2].
>> >> >
>> >> > Before we get too deep into the details of the patches, please could
>> >> > you do a compare/contrast to Greybus.
>> >>
>> >> Thank you for the prompt feedback on the RFC. We took a look at Greybus
>> >> in the past and it didn't seem to fit our needs. One of the main use
>> >> case that drove the development of CPC was to support WiFi (in
>> >> coexistence with other radio stacks) over SDIO, and get the maximum
>> >> throughput possible. We concluded that to achieve this we would need
>> >> packet aggregation, as sending one frame at a time over SDIO is
>> >> wasteful, and managing Radio Co-Processor available buffers, as sending
>> >> frames that the RCP is not able to process would degrade performance.
>> >>
>> >> Greybus don't seem to offer these capabilities. It seems to be more
>> >> geared towards implementing RPC, where the host would send a command,
>> >> and then wait for the device to execute it and to respond. For Greybus'
>> >> protocols that implement some "streaming" features like audio or video
>> >> capture, the data streams go to an I2S or CSI interface, but it doesn't
>> >> seem to go through a CPort. So it seems to act as a backbone to connect
>> >> CPorts together, but high-throughput transfers happen on other types of
>> >> links. CPC is more about moving data over a physical link, guaranteeing
>> >> ordered delivery and avoiding unnecessary transmissions if remote
>> >> doesn't have the resources, it's much lower level than Greybus.
>> >
>> > As is said, i don't know Greybus too well. I hope its Maintainers can
>> > comment on this.
>> >
>> >> > Also, this patch adds Bluetooth, you talk about Z-Wave and Zigbee. But
>> >> > the EFR32 is a general purpose SoC, with I2C, SPI, PWM, UART. Greybus
>> >> > has support for these, although the code is current in staging. But
>> >> > for staging code, it is actually pretty good.
>> >>
>> >> I agree with you that the EFR32 is a general purpose SoC and exposing
>> >> all available peripherals would be great, but most customers buy it as
>> >> an RCP module with one or more radio stacks enabled, and that's the
>> >> situation we're trying to address. Maybe I introduced a framework with
>> >> custom bus, drivers and endpoints where it was unnecessary, the goal is
>> >> not to be super generic but only to support coexistence of our radio
>> >> stacks.
>> >
>> > This leads to my next problem.
>> >
>> > https://www.nordicsemi.com/-/media/Software-and-other-downloads/Product-Briefs/nRF5340-SoC-PB.pdf
>> > Nordic Semiconductor has what appears to be a similar device.
>> >
>> > https://www.microchip.com/en-us/products/wireless-connectivity/bluetooth-low-energy/microcontrollers
>> > Microchip has a similar device as well.
>> >
>> > https://www.ti.com/product/CC2674R10
>> > TI has a similar device.
>> >
>> > And maybe there are others?
>> >
>> > Are we going to get a Silabs CPC, a Nordic CPC, a Microchip CPC, a TI
>> > CPC, and an ACME CPC?
>> >
>> > How do we end up with one implementation?
>> >
>> > Maybe Greybus does not currently support your streaming use case too
>> > well, but it is at least vendor neutral. Can it be extended for
>> > streaming?
>>
>> I get the sentiment that we don't want every single vendor to push their
>> own protocols that are ever so slightly different. To be honest, I don't
>> know if Greybus can be extended for that use case, or if it's something
>> they are interested in supporting. I've subscribed to greybus-dev so
>> hopefully my email will get through this time (previous one is pending
>> approval).
>>
>> Unfortunately, we're deep down the CPC road, especially on the firmware
>> side. Blame on me for not sending the RFC sooner and getting feedback
>> earlier, but if we have to massively change our course of action we need
>> some degree of confidence that this is a viable alternative for
>> achieving high-throughput for WiFi over SDIO. I would really value any
>> input from the Greybus folks on this.
>
> So what you are looking for is a standard way to "tunnel" SDIO over some
> other physical transport, right?  If so, then yes, please use Greybus as
> that is exactly what it was designed for.

No, we want to use SDIO as the physical transport. To use the Greybus
terminology, our MCUs would act as modules with a single interface, and
that interface would have "radio" bundles for each of the supported
stacks.

So we want to expose our radio stacks in Linux, but Greybus doesn't
define protocols for that, so that's kind of uncharted territory, and
we were wondering if Greybus would be the right tool for it. I hope the
situation is a bit clearer now.
Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Greg Kroah-Hartman 7 months ago
On Thu, May 15, 2025 at 11:00:39AM -0400, Damien Riégel wrote:
> On Thu May 15, 2025 at 3:49 AM EDT, Greg Kroah-Hartman wrote:
> > On Wed, May 14, 2025 at 06:52:27PM -0400, Damien Riégel wrote:
> >> On Tue May 13, 2025 at 5:53 PM EDT, Andrew Lunn wrote:
> >> > On Tue, May 13, 2025 at 05:15:20PM -0400, Damien Riégel wrote:
> >> >> On Mon May 12, 2025 at 1:07 PM EDT, Andrew Lunn wrote:
> >> >> > On Sun, May 11, 2025 at 09:27:33PM -0400, Damien Riégel wrote:
> >> >> >> Hi,
> >> >> >>
> >> >> >>
> >> >> >> This patchset brings initial support for Silicon Labs CPC protocol,
> >> >> >> standing for Co-Processor Communication. This protocol is used by the
> >> >> >> EFR32 Series [1]. These devices offer a variety for radio protocols,
> >> >> >> such as Bluetooth, Z-Wave, Zigbee [2].
> >> >> >
> >> >> > Before we get too deep into the details of the patches, please could
> >> >> > you do a compare/contrast to Greybus.
> >> >>
> >> >> Thank you for the prompt feedback on the RFC. We took a look at Greybus
> >> >> in the past and it didn't seem to fit our needs. One of the main use
> >> >> case that drove the development of CPC was to support WiFi (in
> >> >> coexistence with other radio stacks) over SDIO, and get the maximum
> >> >> throughput possible. We concluded that to achieve this we would need
> >> >> packet aggregation, as sending one frame at a time over SDIO is
> >> >> wasteful, and managing Radio Co-Processor available buffers, as sending
> >> >> frames that the RCP is not able to process would degrade performance.
> >> >>
> >> >> Greybus don't seem to offer these capabilities. It seems to be more
> >> >> geared towards implementing RPC, where the host would send a command,
> >> >> and then wait for the device to execute it and to respond. For Greybus'
> >> >> protocols that implement some "streaming" features like audio or video
> >> >> capture, the data streams go to an I2S or CSI interface, but it doesn't
> >> >> seem to go through a CPort. So it seems to act as a backbone to connect
> >> >> CPorts together, but high-throughput transfers happen on other types of
> >> >> links. CPC is more about moving data over a physical link, guaranteeing
> >> >> ordered delivery and avoiding unnecessary transmissions if remote
> >> >> doesn't have the resources, it's much lower level than Greybus.
> >> >
> >> > As is said, i don't know Greybus too well. I hope its Maintainers can
> >> > comment on this.
> >> >
> >> >> > Also, this patch adds Bluetooth, you talk about Z-Wave and Zigbee. But
> >> >> > the EFR32 is a general purpose SoC, with I2C, SPI, PWM, UART. Greybus
> >> >> > has support for these, although the code is current in staging. But
> >> >> > for staging code, it is actually pretty good.
> >> >>
> >> >> I agree with you that the EFR32 is a general purpose SoC and exposing
> >> >> all available peripherals would be great, but most customers buy it as
> >> >> an RCP module with one or more radio stacks enabled, and that's the
> >> >> situation we're trying to address. Maybe I introduced a framework with
> >> >> custom bus, drivers and endpoints where it was unnecessary, the goal is
> >> >> not to be super generic but only to support coexistence of our radio
> >> >> stacks.
> >> >
> >> > This leads to my next problem.
> >> >
> >> > https://www.nordicsemi.com/-/media/Software-and-other-downloads/Product-Briefs/nRF5340-SoC-PB.pdf
> >> > Nordic Semiconductor has what appears to be a similar device.
> >> >
> >> > https://www.microchip.com/en-us/products/wireless-connectivity/bluetooth-low-energy/microcontrollers
> >> > Microchip has a similar device as well.
> >> >
> >> > https://www.ti.com/product/CC2674R10
> >> > TI has a similar device.
> >> >
> >> > And maybe there are others?
> >> >
> >> > Are we going to get a Silabs CPC, a Nordic CPC, a Microchip CPC, a TI
> >> > CPC, and an ACME CPC?
> >> >
> >> > How do we end up with one implementation?
> >> >
> >> > Maybe Greybus does not currently support your streaming use case too
> >> > well, but it is at least vendor neutral. Can it be extended for
> >> > streaming?
> >>
> >> I get the sentiment that we don't want every single vendor to push their
> >> own protocols that are ever so slightly different. To be honest, I don't
> >> know if Greybus can be extended for that use case, or if it's something
> >> they are interested in supporting. I've subscribed to greybus-dev so
> >> hopefully my email will get through this time (previous one is pending
> >> approval).
> >>
> >> Unfortunately, we're deep down the CPC road, especially on the firmware
> >> side. Blame on me for not sending the RFC sooner and getting feedback
> >> earlier, but if we have to massively change our course of action we need
> >> some degree of confidence that this is a viable alternative for
> >> achieving high-throughput for WiFi over SDIO. I would really value any
> >> input from the Greybus folks on this.
> >
> > So what you are looking for is a standard way to "tunnel" SDIO over some
> > other physical transport, right?  If so, then yes, please use Greybus as
> > that is exactly what it was designed for.
> 
> No, we want to use SDIO as physical transport. To use the Greybus
> terminology, our MCUs would act as modules with a single interface, and
> that interface would have "radio" bundles for each of the supported
> stack.
> 
> So we want to expose our radio stacks in Linux and Greybus doesn't
> define protocols for that, so that's kind of uncharted territories and
> we were wondering if Greybus would be the right tool for that. I hope
> the situation is a bit clearer now.

Yes, Greybus does not expose a "wifi" protocol, as that is way too
device specific, sorry.

So this would just be like any other normal SDIO wifi device then;
shouldn't be anything special, right?

thanks,

greg k-h
Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Damien Riégel 7 months ago
On Fri May 16, 2025 at 3:51 AM EDT, Greg Kroah-Hartman wrote:
> On Thu, May 15, 2025 at 11:00:39AM -0400, Damien Riégel wrote:
>> On Thu May 15, 2025 at 3:49 AM EDT, Greg Kroah-Hartman wrote:
>> > On Wed, May 14, 2025 at 06:52:27PM -0400, Damien Riégel wrote:
>> >> On Tue May 13, 2025 at 5:53 PM EDT, Andrew Lunn wrote:
>> >> > On Tue, May 13, 2025 at 05:15:20PM -0400, Damien Riégel wrote:
>> >> >> On Mon May 12, 2025 at 1:07 PM EDT, Andrew Lunn wrote:
>> >> >> > On Sun, May 11, 2025 at 09:27:33PM -0400, Damien Riégel wrote:
>> >> >> >> Hi,
>> >> >> >>
>> >> >> >>
>> >> >> >> This patchset brings initial support for Silicon Labs CPC protocol,
>> >> >> >> standing for Co-Processor Communication. This protocol is used by the
>> >> >> >> EFR32 Series [1]. These devices offer a variety for radio protocols,
>> >> >> >> such as Bluetooth, Z-Wave, Zigbee [2].
>> >> >> >
>> >> >> > Before we get too deep into the details of the patches, please could
>> >> >> > you do a compare/contrast to Greybus.
>> >> >>
>> >> >> Thank you for the prompt feedback on the RFC. We took a look at Greybus
>> >> >> in the past and it didn't seem to fit our needs. One of the main use
>> >> >> case that drove the development of CPC was to support WiFi (in
>> >> >> coexistence with other radio stacks) over SDIO, and get the maximum
>> >> >> throughput possible. We concluded that to achieve this we would need
>> >> >> packet aggregation, as sending one frame at a time over SDIO is
>> >> >> wasteful, and managing Radio Co-Processor available buffers, as sending
>> >> >> frames that the RCP is not able to process would degrade performance.
>> >> >>
>> >> >> Greybus don't seem to offer these capabilities. It seems to be more
>> >> >> geared towards implementing RPC, where the host would send a command,
>> >> >> and then wait for the device to execute it and to respond. For Greybus'
>> >> >> protocols that implement some "streaming" features like audio or video
>> >> >> capture, the data streams go to an I2S or CSI interface, but it doesn't
>> >> >> seem to go through a CPort. So it seems to act as a backbone to connect
>> >> >> CPorts together, but high-throughput transfers happen on other types of
>> >> >> links. CPC is more about moving data over a physical link, guaranteeing
>> >> >> ordered delivery and avoiding unnecessary transmissions if remote
>> >> >> doesn't have the resources, it's much lower level than Greybus.
>> >> >
>> >> > As is said, i don't know Greybus too well. I hope its Maintainers can
>> >> > comment on this.
>> >> >
>> >> >> > Also, this patch adds Bluetooth, you talk about Z-Wave and Zigbee. But
>> >> >> > the EFR32 is a general purpose SoC, with I2C, SPI, PWM, UART. Greybus
>> >> >> > has support for these, although the code is current in staging. But
>> >> >> > for staging code, it is actually pretty good.
>> >> >>
>> >> >> I agree with you that the EFR32 is a general purpose SoC and exposing
>> >> >> all available peripherals would be great, but most customers buy it as
>> >> >> an RCP module with one or more radio stacks enabled, and that's the
>> >> >> situation we're trying to address. Maybe I introduced a framework with
>> >> >> custom bus, drivers and endpoints where it was unnecessary, the goal is
>> >> >> not to be super generic but only to support coexistence of our radio
>> >> >> stacks.
>> >> >
>> >> > This leads to my next problem.
>> >> >
>> >> > https://www.nordicsemi.com/-/media/Software-and-other-downloads/Product-Briefs/nRF5340-SoC-PB.pdf
>> >> > Nordic Semiconductor has what appears to be a similar device.
>> >> >
>> >> > https://www.microchip.com/en-us/products/wireless-connectivity/bluetooth-low-energy/microcontrollers
>> >> > Microchip has a similar device as well.
>> >> >
>> >> > https://www.ti.com/product/CC2674R10
>> >> > TI has a similar device.
>> >> >
>> >> > And maybe there are others?
>> >> >
>> >> > Are we going to get a Silabs CPC, a Nordic CPC, a Microchip CPC, a TI
>> >> > CPC, and an ACME CPC?
>> >> >
>> >> > How do we end up with one implementation?
>> >> >
>> >> > Maybe Greybus does not currently support your streaming use case too
>> >> > well, but it is at least vendor neutral. Can it be extended for
>> >> > streaming?
>> >>
>> >> I get the sentiment that we don't want every single vendor to push their
>> >> own protocols that are ever so slightly different. To be honest, I don't
>> >> know if Greybus can be extended for that use case, or if it's something
>> >> they are interested in supporting. I've subscribed to greybus-dev so
>> >> hopefully my email will get through this time (previous one is pending
>> >> approval).
>> >>
>> >> Unfortunately, we're deep down the CPC road, especially on the firmware
>> >> side. The blame is on me for not sending the RFC sooner and getting
>> >> feedback earlier, but if we have to massively change our course of
>> >> action, we need some degree of confidence that this is a viable
>> >> alternative for achieving high throughput for WiFi over SDIO. I would
>> >> really value any input from the Greybus folks on this.
>> >
>> > So what you are looking for is a standard way to "tunnel" SDIO over some
>> > other physical transport, right?  If so, then yes, please use Greybus as
>> > that is exactly what it was designed for.
>>
>> No, we want to use SDIO as the physical transport. To use the Greybus
>> terminology, our MCUs would act as modules with a single interface, and
>> that interface would have "radio" bundles for each of the supported
>> stacks.
>>
>> So we want to expose our radio stacks in Linux, and Greybus doesn't
>> define protocols for that, so it's kind of uncharted territory and we
>> were wondering if Greybus would be the right tool for it. I hope the
>> situation is a bit clearer now.
>
> Yes, greybus does not expose a "wifi" protocol as that is way too device
> specific, sorry.
>
> So this just would be like any other normal SDIO wifi device then,
> shouldn't be anything special, right?

WiFi is just one of the radio stacks that can be present, but there can
be other radio stacks running on the same device and sharing the same
physical transport, like Bluetooth, Zigbee, or OpenThread. The goal of
CPC (our custom protocol) is to multiplex all of these protocols over
the same physical bus.

I think Andrew pulled Greybus into the discussion because there is some
overlap between Greybus and CPC:
  - Greybus has bundles and CPorts, CPC only has "endpoints", which
    would be the equivalent of a bundle with a single cport
  - discoverability of Greybus bundles/CPC endpoints by the host
  - multiple bundles/endpoints might coexist in the same
    module/CPC-enabled device
  - bundles/endpoints are independent from each other and each has its
    own dedicated driver

Greybus goes a step further and specs some protocols like GPIO or UART.
CPC doesn't spec what goes over endpoints because it's geared towards
radio applications and as you said, it's very device/stack specific.
Once an endpoint is connected, CPC just passes a bidirectional stream
of data between the two ends, which are free to do whatever they want
with it. A good example of that is the Bluetooth driver that's part of
this RFC [1]. I hope my explanations make sense.

[1] https://lore.kernel.org/netdev/20250512012748.79749-16-damien.riegel@silabs.com/T/#u
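
To make the comparison a bit more concrete, here is a very rough sketch
of what an endpoint boils down to from a driver's point of view. The
names below are purely illustrative and do not match the identifiers
used in this patchset; the point is only that an endpoint is an opaque,
ordered, bidirectional frame pipe with connect/receive/disconnect
notifications, and that CPC never interprets the payload:

/* Illustrative sketch only -- hypothetical names, not this RFC's API. */
#include <linux/skbuff.h>

struct example_cpc_endpoint;

struct example_cpc_endpoint_ops {
        /* The co-processor accepted the connection on this endpoint. */
        void (*connected)(struct example_cpc_endpoint *ep);
        /* A complete frame was received on this endpoint. */
        void (*receive)(struct example_cpc_endpoint *ep, struct sk_buff *skb);
        /* The remote side closed the endpoint. */
        void (*disconnected)(struct example_cpc_endpoint *ep);
};

/*
 * Queue a frame for in-order transmission. May fail if the remote
 * co-processor has no buffer available for this endpoint.
 */
int example_cpc_endpoint_send(struct example_cpc_endpoint *ep,
                              struct sk_buff *skb);

The Bluetooth driver in this series is essentially a thin shim over such
a pipe: HCI packets go down unchanged, and received frames go up to the
HCI core.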
Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Andrew Lunn 7 months ago
> I think Andrew pulled Greybus in the discussion because there is some
> overlap between Greybus and CPC:
>   - Greybus has bundles and CPorts, CPC only has "endpoints", which
>     would be the equivalent of a bundle with a single cport
>   - discoverability of Greybus bundles/CPC endpoints by the host
>   - multiple bundles/endpoints might coexist in the same
>     module/CPC-enabled device
>   - bundles/endpoints are independent from each other and each has its
>     own dedicated driver
> 
> Greybus goes a step further and specs some protocols like GPIO or UART.
> CPC doesn't spec what goes over endpoints because it's geared towards
> radio applications and as you said, it's very device/stack specific.

Is it device specific? Look at your Bluetooth implementation. I don't
see anything device specific in it. That should work for any of the
vendors of similar chips to yours.

For 802.15.4, Linux defines:

struct ieee802154_ops {
        struct module   *owner;
        int             (*start)(struct ieee802154_hw *hw);
        void            (*stop)(struct ieee802154_hw *hw);
        int             (*xmit_sync)(struct ieee802154_hw *hw,
                                     struct sk_buff *skb);
        int             (*xmit_async)(struct ieee802154_hw *hw,
                                      struct sk_buff *skb);
        int             (*ed)(struct ieee802154_hw *hw, u8 *level);
        int             (*set_channel)(struct ieee802154_hw *hw, u8 page,
                                       u8 channel);
        int             (*set_hw_addr_filt)(struct ieee802154_hw *hw,
                                            struct ieee802154_hw_addr_filt *filt,
                                            unsigned long changed);
        int             (*set_txpower)(struct ieee802154_hw *hw, s32 mbm);
        int             (*set_lbt)(struct ieee802154_hw *hw, bool on);
        int             (*set_cca_mode)(struct ieee802154_hw *hw,
                                        const struct wpan_phy_cca *cca);
        int             (*set_cca_ed_level)(struct ieee802154_hw *hw, s32 mbm);
        int             (*set_csma_params)(struct ieee802154_hw *hw,
                                           u8 min_be, u8 max_be, u8 retries);
        int             (*set_frame_retries)(struct ieee802154_hw *hw,
                                             s8 retries);
        int             (*set_promiscuous_mode)(struct ieee802154_hw *hw,
                                                const bool on);
};

Many of these are optional, but this gives an abstract representation
of a device, which it should be possible to turn into a protocol
spoken over a transport bus like SPI or SDIO.
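
To pick one example (purely illustrative, not an existing wire format),
set_channel() maps naturally onto a tiny request/response pair; the
structures and opcode below are made up:

/* Hypothetical encoding, shown only to illustrate that the ops
 * structure above maps onto simple, vendor-neutral messages. */
#include <linux/types.h>

#define EXAMPLE_WPAN_OP_SET_CHANNEL     0x03

struct example_wpan_msg_hdr {
        u8      endpoint;       /* which radio stack on the co-processor */
        u8      opcode;         /* EXAMPLE_WPAN_OP_* */
        __le16  len;            /* payload length */
} __packed;

struct example_wpan_set_channel_req {
        struct example_wpan_msg_hdr hdr;
        u8      page;
        u8      channel;
} __packed;

struct example_wpan_set_channel_rsp {
        struct example_wpan_msg_hdr hdr;
        __le32  status;
} __packed;

Every vendor with such a co-processor could speak the same messages;
only the transport underneath would differ.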

This also comes back to my point of there being at least four vendors
of devices like yours. Linux does not want four or more
implementations of this, each 90% the same, just a different way of
converting this structure of operations into messages over a transport
bus.

You have to define the protocol. Mainline needs that so when the next
vendor comes along, we can point at your protocol and say that is how
it has to be implemented in Mainline. Make your firmware on the SoC
understand it.  You have the advantage that you are here first, you
get to define that protocol, but you do need to clearly define it.

You have listed how your implementation is similar to Greybus. You say
what is not so great is streaming, i.e. the bulk data transfer needed
to implement xmit_sync() and xmit_async() above. Greybus is too much
RPC based. RPCs are actually what you want for most of the operations
listed above, but i agree for data, in order to keep the transport
fully loaded, you want double buffering. However, that appears to be
possible with the current Greybus code.

gb_operation_unidirectional_timeout() says:

 * Note that successful send of a unidirectional operation does not imply that
 * the request has actually reached the remote end of the connection.
 */

So long as you are doing your memory management correctly, i don't see
why you cannot implement double buffering in the transport driver.

I also don't see why you cannot extend the Greybus upper API and add a
true gb_operation_unidirectional_async() call.
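
Something along these lines, say (a sketch only, mirroring the existing
gb_operation_unidirectional_timeout() arguments; the exact signature
would need to be worked out with the Greybus maintainers):

#include <linux/greybus.h>

/*
 * Hypothetical extension: queue a unidirectional request and return
 * immediately. @callback runs once the request has been handed to the
 * host device (i.e. is on the wire), at which point the caller may
 * free or reuse @request.
 */
int gb_operation_unidirectional_async(struct gb_connection *connection,
                                      int type, void *request,
                                      size_t request_size,
                                      void (*callback)(void *context, int status),
                                      void *context);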

You also said that lots of small transfers are inefficient, and you
wanted to combine small high level messages into one big transport
layer message. This is something you frequently see with USB Ethernet
dongles. The Ethernet driver puts a number of small Ethernet packets
into one USB URB. The USB layer itself has no idea this is going on. I
don't see why the same cannot be done here; Greybus itself does not
need to be aware of the packet consolidation.
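
Roughly (illustrative code, not a proposal for a specific framing), the
transport driver can drain its transmit queue into one bus-sized buffer
and prefix each message with its length; the helper below and its
16-bit length framing are made up for the example:

#include <linux/skbuff.h>
#include <linux/string.h>

/* Pack as many queued messages as fit into one SPI/SDIO transfer. */
static size_t example_consolidate(struct sk_buff_head *txq, u8 *buf,
                                  size_t bufsz)
{
        struct sk_buff *skb;
        size_t used = 0;

        while ((skb = skb_dequeue(txq))) {
                if (used + 2 + skb->len > bufsz) {
                        /* Doesn't fit; keep it for the next transfer. */
                        skb_queue_head(txq, skb);
                        break;
                }
                buf[used] = skb->len & 0xff;    /* 16-bit LE length prefix */
                buf[used + 1] = skb->len >> 8;
                memcpy(buf + used + 2, skb->data, skb->len);
                used += 2 + skb->len;
                dev_kfree_skb(skb);
        }

        return used;    /* bytes to hand to the bus driver */
}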

	Andrew
Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Alex Elder 6 months, 4 weeks ago
On 5/18/25 10:23 AM, Andrew Lunn wrote:
>> I think Andrew pulled Greybus in the discussion because there is some
>> overlap between Greybus and CPC:
>>    - Greybus has bundles and CPorts, CPC only has "endpoints", which
>>      would be the equivalent of a bundle with a single cport
>>    - discoverability of Greybus bundles/CPC endpoints by the host
>>    - multiple bundles/endpoints might coexist in the same
>>      module/CPC-enabled device
>>    - bundles/endpoints are independent from each other and each has its
>>      own dedicated driver
>>
>> Greybus goes a step further and specs some protocols like GPIO or UART.
>> CPC doesn't spec what goes over endpoints because it's geared towards
>> radio applications and as you said, it's very device/stack specific.
> 
> Is it device specific? Look at your Bluetooth implementation. I don't
> see anything device specific in it. That should work for any of the
> vendors of similar chips to yours.
> 
> For 802.15.4, Linux defines:
> 
> struct ieee802154_ops {
>          struct module   *owner;
>          int             (*start)(struct ieee802154_hw *hw);
>          void            (*stop)(struct ieee802154_hw *hw);
>          int             (*xmit_sync)(struct ieee802154_hw *hw,
>                                       struct sk_buff *skb);
>          int             (*xmit_async)(struct ieee802154_hw *hw,
>                                        struct sk_buff *skb);
>          int             (*ed)(struct ieee802154_hw *hw, u8 *level);
>          int             (*set_channel)(struct ieee802154_hw *hw, u8 page,
>                                         u8 channel);
>          int             (*set_hw_addr_filt)(struct ieee802154_hw *hw,
>                                              struct ieee802154_hw_addr_filt *filt,
>                                              unsigned long changed);
>          int             (*set_txpower)(struct ieee802154_hw *hw, s32 mbm);
>          int             (*set_lbt)(struct ieee802154_hw *hw, bool on);
>          int             (*set_cca_mode)(struct ieee802154_hw *hw,
>                                          const struct wpan_phy_cca *cca);
>          int             (*set_cca_ed_level)(struct ieee802154_hw *hw, s32 mbm);
>          int             (*set_csma_params)(struct ieee802154_hw *hw,
>                                             u8 min_be, u8 max_be, u8 retries);
>          int             (*set_frame_retries)(struct ieee802154_hw *hw,
>                                               s8 retries);
>          int             (*set_promiscuous_mode)(struct ieee802154_hw *hw,
>                                                  const bool on);
> };
> 
> Many of these are optional, but this gives an abstract representation
> of a device, which it should be possible to turn into a protocol
> spoken over a transport bus like SPI or SDIO.

This is essentially how Greybus does things.  It sets
up drivers on the Linux side that translate callback
functions into Greybus operations that get performed
on target hardware on the remote module.

> This also comes back to my point of there being at least four vendors
> of devices like yours. Linux does not want four or more
> implementations of this, each 90% the same, just a different way of
> converting this structure of operations into messages over a transport
> bus.
> 
> You have to define the protocol. Mainline needs that so when the next
> vendor comes along, we can point at your protocol and say that is how
> it has to be implemented in Mainline. Make your firmware on the SoC
> understand it.  You have the advantage that you are here first, you
> get to define that protocol, but you do need to clearly define it.

I agree with all of this.

> You have listed how your implementation is similar to Greybus. You say
> what is not so great is streaming, i.e. the bulk data transfer needed
> to implement xmit_sync() and xmit_async() above. Greybus is too much
> RPC based. RPCs are actually what you want for most of the operations
> listed above, but i agree for data, in order to keep the transport
> fully loaded, you want double buffering. However, that appears to be
> possible with the current Greybus code.
> 
> gb_operation_unidirectional_timeout() says:

Yes, these are request messages that don't require a response.
The acknowledgement is about when the host *sent it*, not when
it got received.  They're rarely used but I could see them being
used this way.  Still, you might be limited to 255 or so in-flight
messages.

					-Alex

>   * Note that successful send of a unidirectional operation does not imply that
>   * the request has actually reached the remote end of the connection.
>   */
> 
> So long as you are doing your memory management correctly, i don't see
> why you cannot implement double buffering in the transport driver.
> 
> I also don't see why you cannot extend the Greybus upper API and add a
> true gb_operation_unidirectional_async() call.
> 
> You also said that lots of small transfers are inefficient, and you
> wanted to combine small high level messages into one big transport
> layer message. This is something you frequently see with USB Ethernet
> dongles. The Ethernet driver puts a number of small Ethernet packets
> into one USB URB. The USB layer itself has no idea this is going on. I
> don't see why the same cannot be done here, greybus itself does not
> need to be aware of the packet consolidation.
> 
> 	Andrew
Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Andrew Lunn 6 months, 4 weeks ago
> > You have listed how your implementation is similar to Greybus. You say
> > what is not so great is streaming, i.e. the bulk data transfer needed
> > to implement xmit_sync() and xmit_async() above. Greybus is too much
> > RPC based. RPCs are actually what you want for most of the operations
> > listed above, but i agree for data, in order to keep the transport
> > fully loaded, you want double buffering. However, that appears to be
> > possible with the current Greybus code.
> > 
> > gb_operation_unidirectional_timeout() says:
> 
> Yes, these are request messages that don't require a response.
> The acknowledgement is about when the host *sent it*, not when
> it got received.  They're rarely used but I could see them being
> used this way.  Still, you might be limited to 255 or so in-flight
> messages.

I don't actually see how you can have multiple messages in-flight, but
maybe I'm missing something. It appears that upper layers pass the
message down and then block on a completion. The signalling of that
completion only happens when the message is on the wire. So it is all
synchronous. In order to have multiple messages in-flight, the lower
layer would have to copy the message, signal the completion, and then
send the copy whenever the transport was free.

The network stack is, however, async by nature. The ndo_start_xmit call
passes an skb. The data in the skb is set up for DMA transfer, and
then ndo_start_xmit returns. Later, when the DMA has completed, the
driver calls dev_kfree_skb() to say it has finished with the skb.

Ideally we want a similar async mechanism, which is why I suggested
gb_operation_unidirectional_async(): pass a message to Greybus,
non-blocking, and have a callback for when the message has hit the
wire and the skb can be freed. The lower level can then keep a list
of skbs so it can quickly do back-to-back transfers over the
transport to keep it busy.

	Andrew
Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Damien Riégel 7 months ago
On Sun May 18, 2025 at 11:23 AM EDT, Andrew Lunn wrote:
> This also comes back to my point of there being at least four vendors
> of devices like yours. Linux does not want four or more
> implementations of this, each 90% the same, just a different way of
> converting this structure of operations into messages over a transport
> bus.
>
> You have to define the protocol. Mainline needs that so when the next
> vendor comes along, we can point at your protocol and say that is how
> it has to be implemented in Mainline. Make your firmware on the SoC
> understand it.  You have the advantage that you are here first, you
> get to define that protocol, but you do need to clearly define it.

I understand that this is the preferred way and I'll push internally for
going that direction. That being said, Greybus seems to offer the
capability to have a custom driver for a given PID/VID, if a module
doesn't implement a Greybus-standardized protocol. Would a custom
Greybus driver for, just as an example, our Wifi stack be an acceptable
option?

> You have listed how your implementation is similar to Greybus. You say
> what is not so great is streaming, i.e. the bulk data transfer needed
> to implement xmit_sync() and xmit_async() above. Greybus is too much
> RPC based. RPCs are actually what you want for most of the operations
> listed above, but i agree for data, in order to keep the transport
> fully loaded, you want double buffering. However, that appears to be
> possible with the current Greybus code.
>
> gb_operation_unidirectional_timeout() says:
>
>  * Note that successful send of a unidirectional operation does not imply that
>  * the request has actually reached the remote end of the connection.
>  */
>
> So long as you are doing your memory management correctly, i don't see
> why you cannot implement double buffering in the transport driver.
>
> I also don't see why you cannot extend the Greybus upper API and add a
> true gb_operation_unidirectional_async() call.

Just because touching a well established subsystem is scary, but I
understand that we're allowed to make changes that make sense.

> You also said that lots of small transfers are inefficient, and you
> wanted to combine small high level messages into one big transport
> layer message. This is something you frequently see with USB Ethernet
> dongles. The Ethernet driver puts a number of small Ethernet packets
> into one USB URB. The USB layer itself has no idea this is going on. I
> don't see why the same cannot be done here, greybus itself does not
> need to be aware of the packet consolidation.

Yeah, so in this design, CPC would really be limited to the transport
bus (SPI for now), doing packet consolidation and managing the RCP's
available buffers. I think at this point, the next step is to come up
with a proof of concept of Greybus over CPC and see if that works or not.

Let me add that I sincerely appreciate that you took the time to review
this RFC and provided an upstream-compatible alternative to what we
proposed, so thank you for that.

        Damien
Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Andrew Lunn 7 months ago
On Mon, May 19, 2025 at 09:21:52PM -0400, Damien Riégel wrote:
> On Sun May 18, 2025 at 11:23 AM EDT, Andrew Lunn wrote:
> > This also comes back to my point of there being at least four vendors
> > of devices like yours. Linux does not want four or more
> > implementations of this, each 90% the same, just a different way of
> > converting this structure of operations into messages over a transport
> > bus.
> >
> > You have to define the protocol. Mainline needs that so when the next
> > vendor comes along, we can point at your protocol and say that is how
> > it has to be implemented in Mainline. Make your firmware on the SoC
> > understand it.  You have the advantage that you are here first, you
> > get to define that protocol, but you do need to clearly define it.
> 
> I understand that this is the preferred way and I'll push internally for
> going that direction. That being said, Greybus seems to offer the
> capability to have a custom driver for a given PID/VID, if a module
> doesn't implement a Greybus-standardized protocol. Would a custom
> Greybus driver for, just as an example, our Wifi stack be an acceptable
> option?

It is not clear to me why a custom driver would be needed. You need to
implement a Linux WiFi driver. That API is well defined, although you
might only need a subset. What do you need in addition to that?

> > So long as you are doing your memory management correctly, i don't see
> > why you cannot implement double buffering in the transport driver.
> >
> > I also don't see why you cannot extend the Greybus upper API and add a
> > true gb_operation_unidirectional_async() call.
> 
> Just because touching a well established subsystem is scary, but I
> understand that we're allowed to make changes that make sense.

There are developers here to help review such changes. And extending
existing Linux subsystems is how Linux has become the dominant OS. You
are getting it for free, building on the work of others, so it is not
too unreasonable to contribute a little bit back by making it even
better.

> 
> > You also said that lots of small transfers are inefficient, and you
> > wanted to combine small high level messages into one big transport
> > layer message. This is something you frequently see with USB Ethernet
> > dongles. The Ethernet driver puts a number of small Ethernet packets
> > into one USB URB. The USB layer itself has no idea this is going on. I
> > don't see why the same cannot be done here, greybus itself does not
> > need to be aware of the packet consolidation.
> 
> Yeah, so in this design, CPC would really be limited to the transport
> bus (SPI for now), to do packet consolidation and managing RCP available
> buffers. I think at this point, the next step is to come up with a proof
> of concept of Greybus over CPC and see if that works or not.

You need to keep the lower level generic. I would not expect anything
Silabs-specific in how you transport Greybus over SPI or SDIO. As part
of gb_operation_unidirectional_async() you need to think about flow
control: you need some generic mechanism to indicate receive-buffer
availability in the device, and when to pause for a while to let the
device catch up, but there is no reason TI, Microchip, Nordic, etc.
should not be able to use the same encapsulation scheme.
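
A simple credit scheme would be enough and is not Silabs-specific, for
example (illustrative only, not a concrete proposal):

#include <linux/types.h>

/*
 * Hypothetical per-frame transport header: every frame carries the
 * number of receive buffers currently free on the sender's side. The
 * host decrements its credit count for each frame it sends and stops
 * transmitting at zero, until a frame from the device reports free
 * buffers again.
 */
struct example_transport_hdr {
        __le16  len;            /* length of the encapsulated payload */
        u8      rx_credits;     /* free receive buffers on the sender */
        u8      flags;
} __packed;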

	Andrew
Re: [RFC net-next 00/15] Add support for Silicon Labs CPC
Posted by Alex Elder 6 months, 4 weeks ago
On 5/20/25 8:04 AM, Andrew Lunn wrote:
> On Mon, May 19, 2025 at 09:21:52PM -0400, Damien Riégel wrote:
>> On Sun May 18, 2025 at 11:23 AM EDT, Andrew Lunn wrote:
>>> This also comes back to my point of there being at least four vendors
>>> of devices like yours. Linux does not want four or more
>>> implementations of this, each 90% the same, just a different way of
>>> converting this structure of operations into messages over a transport
>>> bus.
>>>
>>> You have to define the protocol. Mainline needs that so when the next
>>> vendor comes along, we can point at your protocol and say that is how
>>> it has to be implemented in Mainline. Make your firmware on the SoC
>>> understand it.  You have the advantage that you are here first, you
>>> get to define that protocol, but you do need to clearly define it.
>>
>> I understand that this is the preferred way and I'll push internally for
>> going that direction. That being said, Greybus seems to offer the
>> capability to have a custom driver for a given PID/VID, if a module
>> doesn't implement a Greybus-standardized protocol. Would a custom
>> Greybus driver for, just as an example, our Wifi stack be an acceptable
>> option?
> 
> It is not clear to me why a custom driver would be needed. You need to
> implement a Linux WiFi driver. That API is well defined, although you
> might only need a subset. What do you need in addition to that?

This "custom driver" is needed for CPC too, right?
You need some way to translate what's happening in
the kernel into directions sent over your transport
to the hardware on the other side.

Don't worry about proposing changes to Greybus.  But
please do it incrementally, and share what you would
like to do, so people can help steer you in the most
promising direction.

					-Alex

>>> So long as you are doing your memory management correctly, i don't see
>>> why you cannot implement double buffering in the transport driver.
>>>
>>> I also don't see why you cannot extend the Greybus upper API and add a
>>> true gb_operation_unidirectional_async() call.
>>
>> Just because touching a well established subsystem is scary, but I
>> understand that we're allowed to make changes that make sense.
> 
> There are developers here to help review such changes. And extending
> existing Linux subsystems is how Linux has become the dominant OS. You
> are getting it for free, building on the work of others, so it is not
> too unreasonable to contribute a little bit back by making it even
> better.
> 
>>
>>> You also said that lots of small transfers are inefficient, and you
>>> wanted to combine small high level messages into one big transport
>>> layer message. This is something you frequently see with USB Ethernet
>>> dongles. The Ethernet driver puts a number of small Ethernet packets
>>> into one USB URB. The USB layer itself has no idea this is going on. I
>>> don't see why the same cannot be done here, greybus itself does not
>>> need to be aware of the packet consolidation.
>>
>> Yeah, so in this design, CPC would really be limited to the transport
>> bus (SPI for now), to do packet consolidation and managing RCP available
>> buffers. I think at this point, the next step is to come up with a proof
>> of concept of Greybus over CPC and see if that works or not.
> 
> You need to keep the lower level generic. I would not expect anything
> Silabs specific in how you transport Greybus over SPI or SDIO. As part
> of gb_operation_unidirectional_async() you need to think about flow
> control, you need some generic mechanism to indicate receive buffer
> availability in the device, and when to pause a while to let the
> device catch up, but there is no reason TI, Microchip, Nordic, etc
> should not be able to use the same encapsulation scheme.
> 
> 	Andrew