[PATCHv3 net 1/3] bonding: set AD_RX_PORT_DISABLED when disabling a port

Hangbin Liu posted 3 patches 1 month, 1 week ago
Posted by Hangbin Liu 1 month, 1 week ago
When disabling a port’s collecting and distributing states, updating only
rx_disabled is not sufficient. We also need to set AD_RX_PORT_DISABLED
so that the rx_machine transitions into the AD_RX_EXPIRED state.

One example is in ad_agg_selection_logic(): when a new aggregator is
selected and old active aggregator is disabled, if AD_RX_PORT_DISABLED is
not set, the disabled port may remain stuck in AD_RX_CURRENT due to
continuing to receive partner LACP messages.

The __disable_port() called by ad_disable_collecting_distributing()
does not have this issue, since its caller also clears the
collecting/distributing bits.

The __disable_port() called by bond_3ad_bind_slave() should also be fine,
as the RX state machine is re-initialized to AD_RX_INITIALIZE.

Let's fix this only in ad_agg_selection_logic() to reduce the chances of
unintended side effects.

Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
---
 drivers/net/bonding/bond_3ad.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
index af7f74cfdc08..c47f6a69fd2a 100644
--- a/drivers/net/bonding/bond_3ad.c
+++ b/drivers/net/bonding/bond_3ad.c
@@ -1932,6 +1932,7 @@ static void ad_agg_selection_logic(struct aggregator *agg,
 		if (active) {
 			for (port = active->lag_ports; port;
 			     port = port->next_port_in_aggregator) {
+				port->sm_rx_state = AD_RX_PORT_DISABLED;
 				__disable_port(port);
 			}
 		}
-- 
2.50.1

Re: [PATCHv3 net 1/3] bonding: set AD_RX_PORT_DISABLED when disabling a port
Posted by Jay Vosburgh 1 month, 1 week ago
Hangbin Liu <liuhangbin@gmail.com> wrote:

>When disabling a port’s collecting and distributing states, updating only
>rx_disabled is not sufficient. We also need to set AD_RX_PORT_DISABLED
>so that the rx_machine transitions into the AD_RX_EXPIRED state.
>
>One example is in ad_agg_selection_logic(): when a new aggregator is
>selected and old active aggregator is disabled, if AD_RX_PORT_DISABLED is
>not set, the disabled port may remain stuck in AD_RX_CURRENT due to
>continuing to receive partner LACP messages.

	I'm not sure I'm seeing the problem here, is there an actual
misbehavior being fixed here?  The port is receiving LACPDUs, and from
the receive state machine point of view (Figure 6-18) there's no issue.
The "port_enabled" variable (6.4.7) also informs the state machine
behavior, but that's not the same as what's changed by bonding's
__disable_port function.

	Where I'm going with this is that, when multiple aggregator
support was originally implemented, the theory was to keep aggregators
other than the active agg in a state such that they could be put into
service immediately, without having to do LACPDU exchanges in order to
transition into the appropriate state.  A hot standby, basically,
analogous to an active-backup mode backup interface with link state up.

	I haven't tested this in some time, though, so my question is
whether this change affects the failover time when an active aggregator
is de-selected in favor of another aggregator.  By "failover time," I
mean how long transmission and/or reception are interrupted when
changing from one aggregator to another.  I presume that if aggregator
failover after this change requires LACPDU exchanges, etc., it will take
longer to fail over.

	-J


>The __disable_port() called by ad_disable_collecting_distributing()
>does not have this issue, since its caller also clears the
>collecting/distributing bits.
>
>The __disable_port() called by bond_3ad_bind_slave() should also be fine,
>as the RX state machine is re-initialized to AD_RX_INITIALIZE.
>
>Let's fix this only in ad_agg_selection_logic() to reduce the chances of
>unintended side effects.
>
>Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
>Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
>---
> drivers/net/bonding/bond_3ad.c | 1 +
> 1 file changed, 1 insertion(+)
>
>diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
>index af7f74cfdc08..c47f6a69fd2a 100644
>--- a/drivers/net/bonding/bond_3ad.c
>+++ b/drivers/net/bonding/bond_3ad.c
>@@ -1932,6 +1932,7 @@ static void ad_agg_selection_logic(struct aggregator *agg,
> 		if (active) {
> 			for (port = active->lag_ports; port;
> 			     port = port->next_port_in_aggregator) {
>+				port->sm_rx_state = AD_RX_PORT_DISABLED;
> 				__disable_port(port);
> 			}
> 		}
>-- 
>2.50.1
>

---
	-Jay Vosburgh, jv@jvosburgh.net
Re: [PATCHv3 net 1/3] bonding: set AD_RX_PORT_DISABLED when disabling a port
Posted by Hangbin Liu 1 month, 1 week ago
On Thu, Feb 26, 2026 at 05:16:55PM -0800, Jay Vosburgh wrote:
> Hangbin Liu <liuhangbin@gmail.com> wrote:
> 
> >When disabling a port’s collecting and distributing states, updating only
> >rx_disabled is not sufficient. We also need to set AD_RX_PORT_DISABLED
> >so that the rx_machine transitions into the AD_RX_EXPIRED state.
> >
> >One example is in ad_agg_selection_logic(): when a new aggregator is
> >selected and old active aggregator is disabled, if AD_RX_PORT_DISABLED is
> >not set, the disabled port may remain stuck in AD_RX_CURRENT due to
> >continuing to receive partner LACP messages.
> 
> 	I'm not sure I'm seeing the problem here, is there an actual
> misbehavior being fixed here?  The port is receiving LACPDUs, and from
> the receive state machine point of view (Figure 6-18) there's no issue.
> The "port_enabled" variable (6.4.7) also informs the state machine
> behavior, but that's not the same as what's changed by bonding's
> __disable_port function.

Yes, the reason I do it here is that we select another aggregator and call
__disable_port() for the old one. If we don't update sm_rx_state, the port
will remain in the collecting/distributing state, and the partner will also
stay in the c/d state.

Here we enter a logical paradox: on one hand we want to disable the port,
on the other hand we keep it in the collecting/distributing state.

> 
> 	Where I'm going with this is that, when multiple aggregator
> support was originally implemented, the theory was to keep aggregators
> other than the active agg in a state such that they could be put into
> service immediately, without having to do LACPDU exchanges in order to
> transition into the appropriate state.  A hot standby, basically,
> analogous to an active-backup mode backup interface with link state up.

This sounds good. But without LACPDU exchange, the hot standby actor and
partner should be in collecting/distributing state. What should we do when
the partner starts sending packets to us?
> 
> 	I haven't tested this in some time, though, so my question is
> whether this change affects the failover time when an active aggregator
> is de-selected in favor of another aggregator.  By "failover time," I
> mean how long transmission and/or reception are interrupted when
> changing from one aggregator to another.  I presume that if aggregator
> failover after this change requires LACPDU exchanges, etc., it will take
> longer to fail over.

I haven't tested it yet. I think the failover time should be within 1 second.
Let me do some testing today.

Thanks
Hangbin
Re: [PATCHv3 net 1/3] bonding: set AD_RX_PORT_DISABLED when disabling a port
Posted by Jay Vosburgh 1 month, 1 week ago
Hangbin Liu <liuhangbin@gmail.com> wrote:

>On Thu, Feb 26, 2026 at 05:16:55PM -0800, Jay Vosburgh wrote:
>> Hangbin Liu <liuhangbin@gmail.com> wrote:
>> 
>> >When disabling a port’s collecting and distributing states, updating only
>> >rx_disabled is not sufficient. We also need to set AD_RX_PORT_DISABLED
>> >so that the rx_machine transitions into the AD_RX_EXPIRED state.
>> >
>> >One example is in ad_agg_selection_logic(): when a new aggregator is
>> >selected and old active aggregator is disabled, if AD_RX_PORT_DISABLED is
>> >not set, the disabled port may remain stuck in AD_RX_CURRENT due to
>> >continuing to receive partner LACP messages.
>> 
>> 	I'm not sure I'm seeing the problem here, is there an actual
>> misbehavior being fixed here?  The port is receiving LACPDUs, and from
>> the receive state machine point of view (Figure 6-18) there's no issue.
>> The "port_enabled" variable (6.4.7) also informs the state machine
>> behavior, but that's not the same as what's changed by bonding's
>> __disable_port function.
>
>Yes, the reason I do it here is that we select another aggregator and call
>__disable_port() for the old one. If we don't update sm_rx_state, the port
>will remain in the collecting/distributing state, and the partner will also
>stay in the c/d state.
>
>Here we enter a logical paradox: on one hand we want to disable the port,
>on the other hand we keep it in the collecting/distributing state.

	"disable" the port here really means from bonding's perspective,
so, generally equivalent to the backup interface of an active-backup
mode bond.

	Such a backup interface is typically carrier up and able to send
or receive packets.  The peer generally won't send packets to the backup
interface, however, as no traffic is sent from the backup, and the MAC
for the bond uses a different interface, so no forwarding entries will
direct to the backup interface.

	There are a couple of special cases, like LLDP, that are handled
as an exception, but in general, if a peer does send packets to the
backup interface (due to a switch flood, for example), they're dropped.

>> 	Where I'm going with this is that, when multiple aggregator
>> support was originally implemented, the theory was to keep aggregators
>> other than the active agg in a state such that they could be put into
>> service immediately, without having to do LACPDU exchanges in order to
>> transition into the appropriate state.  A hot standby, basically,
>> analogous to an active-backup mode backup interface with link state up.
>
>This sounds good. But without LACPDU exchange, the hot standby actor and
>partner should be in collecting/distributing state. What should we do when
>the partner starts sending packets to us?

	Did you mean "should not be in c/d state" above?  I.e., without
LACPDU exchange, ... not in c/d state?

	Regardless, as above, the situation is generally equivalent to a
backup interface in active-backup mode: incoming traffic that isn't a
special case is dropped.  Normal traffic (bearing the bond source MAC)
isn't sent, as that would update the peer's forwarding table.

	Nothing in the standard prohibits us from having multiple
aggregators in c/d state simultaneously.  A configuration with two
separate bonds, each with interfaces successfully aggregated together
with their respective peers, wherein those two bonds are placed into a
third bond in active-backup mode is essentially the same thing as what
we're discussing.

	-J

>> 	I haven't tested this in some time, though, so my question is
>> whether this change affects the failover time when an active aggregator
>> is de-selected in favor of another aggregator.  By "failover time," I
>> mean how long transmission and/or reception are interrupted when
>> changing from one aggregator to another.  I presume that if aggregator
>> failover after this change requires LACPDU exchanges, etc., it will take
>> longer to fail over.
>
>I haven't tested it yet. I think the failover time should be within 1 second.
>Let me do some testing today.
>
>Thanks
>Hangbin

---
	-Jay Vosburgh, jv@jvosburgh.net
Re: [PATCHv3 net 1/3] bonding: set AD_RX_PORT_DISABLED when disabling a port
Posted by Hangbin Liu 1 month, 1 week ago
On Thu, Feb 26, 2026 at 08:42:27PM -0800, Jay Vosburgh wrote:
> >> 	I'm not sure I'm seeing the problem here, is there an actual
> >> misbehavior being fixed here?  The port is receiving LACPDUs, and from
> >> the receive state machine point of view (Figure 6-18) there's no issue.
> >> The "port_enabled" variable (6.4.7) also informs the state machine
> >> behavior, but that's not the same as what's changed by bonding's
> >> __disable_port function.
> >
> >Yes, the reason I do it here is that we select another aggregator and call
> >__disable_port() for the old one. If we don't update sm_rx_state, the port
> >will remain in the collecting/distributing state, and the partner will also
> >stay in the c/d state.
> >
> >Here we enter a logical paradox: on one hand we want to disable the port,
> >on the other hand we keep it in the collecting/distributing state.
> 
> 	"disable" the port here really means from bonding's perspective,
> so, generally equivalent to the backup interface of an active-backup
> mode bond.

Oh, got it.

> 
> 	Such a backup interface is typically carrier up and able to send
> or receive packets.  The peer generally won't send packets to the backup
> interface, however, as no traffic is sent from the backup, and the MAC
> for the bond uses a different interface, so no forwarding entries will
> direct to the backup interface.
> 
> 	There are a couple of special cases, like LLDP, that are handled
> as an exception, but in general, if a peer does send packets to the
> backup interface (due to a switch flood, for example), they're dropped.

OK, this makes sense to me.

> 
> >> 	Where I'm going with this is that, when multiple aggregator
> >> support was originally implemented, the theory was to keep aggregators
> >> other than the active agg in a state such that they could be put into
> >> service immediately, without having to do LACPDU exchanges in order to
> >> transition into the appropriate state.  A hot standby, basically,
> >> analogous to an active-backup mode backup interface with link state up.
> >
> >This sounds good. But without LACPDU exchange, the hot standby actor and
                         ^^ I mean with LACPDU exchange..
> >partner should be in collecting/distributing state. What should we do when
> >the partner starts sending packets to us?
> 
> 	Did you mean "should not be in c/d state" above?  I.e., without
> LACPDU exchange, ... not in c/d state?
> 
> 	Regardless, as above, the situation is generally equivalent to a
> backup interface in active-backup mode: incoming traffic that isn't a
> special case is dropped.  Normal traffic (bearing the bond source MAC)
> isn't sent, as that would update the peer's forwarding table.
> 
> 	Nothing in the standard prohibits us from having multiple
> aggregators in c/d state simultaneously.  A configuration with two
> separate bonds, each with interfaces successfully aggregated together
> with their respective peers, wherein those two bonds are placed into a
> third bond in active-backup mode is essentially the same thing as what
> we're discussing.

In theory this looks good. But in fact, when we do a failover, the
previous active port is set to disabled via
  - __disable_port(port)
    - slave->rx_disabled = 1

This prevents the failed-over port from returning to the c/d state. For
example, in my testing (see details in patch 03), we have 4 ports: eth0,
eth1, eth2, eth3. eth0 and eth1 are agg1; eth2 and eth3 are agg2. If we do
a failover on eth1, when eth1 comes up, the final state will be:

3: eth0@if3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
    bond_slave state BACKUP ad_aggregator_id 1 ad_actor_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> actor_port_prio 10

4: eth1@if4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
    bond_slave state BACKUP ad_aggregator_id 1 ad_actor_oper_port_state_str <active,short_timeout,aggregating> ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync> actor_port_prio 255

5: eth2@if3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
    bond_slave state ACTIVE ad_aggregator_id 2 ad_actor_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> actor_port_prio 1000

6: eth3@if4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
    bond_slave state ACTIVE ad_aggregator_id 2 ad_actor_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> actor_port_prio 255

7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    bond mode 802.3ad actor_port_prio ad_aggregator 2

So you can see that eth0's state is c/d, while eth1's state is only active, aggregating.
Do you think this is a correct state?

Thanks
Hangbin
Re: [PATCHv3 net 1/3] bonding: set AD_RX_PORT_DISABLED when disabling a port
Posted by Hangbin Liu 6 days, 13 hours ago
On Fri, Feb 27, 2026 at 06:21:20AM +0000, Hangbin Liu wrote:
> > 	Regardless, as above, the situation is generally equivalent to a
> > backup interface in active-backup mode: incoming traffic that isn't a
> > special case is dropped.  Normal traffic (bearing the bond source MAC)
> > isn't sent, as that would update the peer's forwarding table.
> > 
> > 	Nothing in the standard prohibits us from having multiple
> > aggregators in c/d state simultaneously.  A configuration with two
> > separate bonds, each with interfaces successfully aggregated together
> > with their respective peers, wherein those two bonds are placed into a
> > third bond in active-backup mode is essentially the same thing as what
> > we're discussing.
> 
> In theory this looks good. But in fact, when we do a failover, the
> previous active port is set to disabled via
>   - __disable_port(port)
>     - slave->rx_disabled = 1
> 
> This prevents the failed-over port from returning to the c/d state. For
> example, in my testing (see details in patch 03), we have 4 ports: eth0,
> eth1, eth2, eth3. eth0 and eth1 are agg1; eth2 and eth3 are agg2. If we do
> a failover on eth1, when eth1 comes up, the final state will be:
> 
> 3: eth0@if3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
>     bond_slave state BACKUP ad_aggregator_id 1 ad_actor_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> actor_port_prio 10
> 
> 4: eth1@if4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
>     bond_slave state BACKUP ad_aggregator_id 1 ad_actor_oper_port_state_str <active,short_timeout,aggregating> ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync> actor_port_prio 255
> 
> 5: eth2@if3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
>     bond_slave state ACTIVE ad_aggregator_id 2 ad_actor_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> actor_port_prio 1000
> 
> 6: eth3@if4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
>     bond_slave state ACTIVE ad_aggregator_id 2 ad_actor_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> actor_port_prio 255
> 
> 7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
>     bond mode 802.3ad actor_port_prio ad_aggregator 2
> 
> So you can see that eth0's state is c/d, while eth1's state is only active, aggregating.
> Do you think this is a correct state?

Hi Jay,

Do you have any comments for this issue?

Thanks
Hangbin
Re: [PATCHv3 net 1/3] bonding: set AD_RX_PORT_DISABLED when disabling a port
Posted by Hangbin Liu 4 weeks ago
Hi Jay,

Any comments for this?

Thanks
Hangbin
On Fri, Feb 27, 2026 at 06:21:20AM +0000, Hangbin Liu wrote:
> > 	Such a backup interface is typically carrier up and able to send
> > or receive packets.  The peer generally won't send packets to the backup
> > interface, however, as no traffic is sent from the backup, and the MAC
> > for the bond uses a different interface, so no forwarding entries will
> > direct to the backup interface.
> > 
> > 	There are a couple of special cases, like LLDP, that are handled
> > as an exception, but in general, if a peer does send packets to the
> > backup interface (due to a switch flood, for example), they're dropped.
> 
> OK, this makes sense to me.
> 
> > 
> > >> 	Where I'm going with this is that, when multiple aggregator
> > >> support was originally implemented, the theory was to keep aggregators
> > >> other than the active agg in a state such that they could be put into
> > >> service immediately, without having to do LACPDU exchanges in order to
> > >> transition into the appropriate state.  A hot standby, basically,
> > >> analogous to an active-backup mode backup interface with link state up.
> > >
> > >This sounds good. But without LACPDU exchange, the hot standby actor and
>                          ^^ I mean with LACPDU exchange..
> > >partner should be in collecting/distributing state. What should we do when
> > >the partner starts sending packets to us?
> > 
> > 	Did you mean "should not be in c/d state" above?  I.e., without
> > LACPDU exchange, ... not in c/d state?
> > 
> > 	Regardless, as above, the situation is generally equivalent to a
> > backup interface in active-backup mode: incoming traffic that isn't a
> > special case is dropped.  Normal traffic (bearing the bond source MAC)
> > isn't sent, as that would update the peer's forwarding table.
> > 
> > 	Nothing in the standard prohibits us from having multiple
> > aggregators in c/d state simultaneously.  A configuration with two
> > separate bonds, each with interfaces successfully aggregated together
> > with their respective peers, wherein those two bonds are placed into a
> > third bond in active-backup mode is essentially the same thing as what
> > we're discussing.
> 
> In theory this looks good. But in fact, when we do a failover, the
> previous active port is set to disabled via
>   - __disable_port(port)
>     - slave->rx_disabled = 1
> 
> This prevents the failed-over port from returning to the c/d state. For
> example, in my testing (see details in patch 03), we have 4 ports: eth0,
> eth1, eth2, eth3. eth0 and eth1 are agg1; eth2 and eth3 are agg2. If we do
> a failover on eth1, when eth1 comes up, the final state will be:
> 
> 3: eth0@if3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
>     bond_slave state BACKUP ad_aggregator_id 1 ad_actor_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> actor_port_prio 10
> 
> 4: eth1@if4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
>     bond_slave state BACKUP ad_aggregator_id 1 ad_actor_oper_port_state_str <active,short_timeout,aggregating> ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync> actor_port_prio 255
> 
> 5: eth2@if3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
>     bond_slave state ACTIVE ad_aggregator_id 2 ad_actor_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> actor_port_prio 1000
> 
> 6: eth3@if4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
>     bond_slave state ACTIVE ad_aggregator_id 2 ad_actor_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> actor_port_prio 255
> 
> 7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
>     bond mode 802.3ad actor_port_prio ad_aggregator 2
> 
> So you can see that eth0's state is c/d, while eth1's state is only active, aggregating.
> Do you think this is a correct state?
> 
> Thanks
> Hangbin
Re: [PATCHv3 net 1/3] bonding: set AD_RX_PORT_DISABLED when disabling a port
Posted by Hangbin Liu 1 month, 1 week ago
On Fri, Feb 27, 2026 at 02:31:05AM +0000, Hangbin Liu wrote:
> > 	I haven't tested this in some time, though, so my question is
> > whether this change affects the failover time when an active aggregator
> > is de-selected in favor of another aggregator.  By "failover time," I
> > mean how long transmission and/or reception are interrupted when
> > changing from one aggregator to another.  I presume that if aggregator
> failover after this change requires LACPDU exchanges, etc., it will take
> > longer to fail over.
> 
> I haven't tested it yet. I think the failover time should be within 1 second.
> Let me do some testing today.

I did a test, and the failover takes about 200 ms in the environment from patch 03.

Here is the full log:

Code: the timer starts after the old active port goes down.
```
ip -n "${c_ns}" link set eth1 down
date +'%F %T.%3N'
ip -n "${c_ns}" -d link show eth2
while ! ip -n "${c_ns}" -d link show eth2 | grep -q distributing; do
        sleep 0.01
done
date +'%F %T.%3N'
ip -n "${c_ns}" -d link show eth2
```

Log:
2026-02-26 22:59:54.334   <-- The time when eth1 went down

5: eth2@if3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 12:40:54:81:d3:80 brd ff:ff:ff:ff:ff:ff link-netns b_ns-PKIXVg promiscuity 0 allmulti 0 minmtu 68 maxmtu 65535
    veth
    bond_slave state BACKUP mii_status UP link_failure_count 0 perm_hwaddr 26:10:46:58:22:e4 queue_id 0 prio 0 ad_aggregator_id 2 ad_actor_oper_port_state 7 ad_actor_oper_port_state_str <active,short_timeout,aggregating> ad_partner_oper_port_state 15 ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync> actor_port_prio 1000 addrgenmode eui64 numtxqueues 4 numrxqueues 4 gso_max_size 65536 gso_max_segs 65535 tso_max_size 524280 tso_max_segs 65535 gro_max_size 65536 gso_ipv4_max_size 65536 gro_ipv4_max_size 65536

2026-02-26 22:59:54.529   <-- The time when eth2 entered the collecting,distributing state

5: eth2@if3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 12:40:54:81:d3:80 brd ff:ff:ff:ff:ff:ff link-netns b_ns-PKIXVg promiscuity 0 allmulti 0 minmtu 68 maxmtu 65535
    veth
    bond_slave state ACTIVE mii_status UP link_failure_count 0 perm_hwaddr 26:10:46:58:22:e4 queue_id 0 prio 0 ad_aggregator_id 2 ad_actor_oper_port_state 63 ad_actor_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> ad_partner_oper_port_state 63 ad_partner_oper_port_state_str <active,short_timeout,aggregating,in_sync,collecting,distributing> actor_port_prio 1000 addrgenmode eui64 numtxqueues 4 numrxqueues 4 gso_max_size 65536 gso_max_segs 65535 tso_max_size 524280 tso_max_segs 65535 gro_max_size 65536 gso_ipv4_max_size 65536 gro_ipv4_max_size 65536

Thanks
Hangbin