[PATCH v2] ibmvnic: fix OOB array access in ibmvnic_xmit on queue count reduction

When the number of TX queues is reduced (e.g., via ethtool -L), the
Qdisc layer retains previously enqueued skbs with queue mappings from
before the reduction. After the reset completes and tx_queues_active is
set to true, netif_tx_start_all_queues() wakes the queues and the stale
skbs are pushed down through ibmvnic_xmit(). The queue index from
skb_get_queue_mapping() may then exceed the bounds of the newly
allocated arrays, causing out-of-bounds reads of tx_scrq[] and
tx_pool[]/tso_pool[].

The existing tx_queues_active guard does not catch this: __ibmvnic_open()
sets it to true before netif_tx_start_all_queues() restarts the queues,
so stale skbs pass the check while still carrying an invalid queue index.

Fold a bounds check against num_active_tx_scrqs into the tx_queues_active
guard, reusing the same drop-packet handling. Since tx_stats_buffers[] is
allocated for IBMVNIC_MAX_QUEUES entries (not just num_active_tx_scrqs),
all drop paths can safely fall through to the out: label's stats update.

Also move rcu_read_unlock() to after the per-queue stats updates, as the
RCU critical section is already large and releasing it a few instructions
earlier provides no practical benefit.

Fixes: 4219196d1f66 ("ibmvnic: fix race between xmit and reset")
Reported-by: Yuhao Jiang <danisjiang@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Tyllis Xu <LivelyCarpet87@gmail.com>
---
v2: Fold the bounds check into the existing !tx_queues_active guard rather
    than adding a separate if block with unlikely(), reusing the same
    drop-packet handling (dev_kfree_skb_any + tx_send_failed/tx_dropped
    increments + goto out). Remove the dedicated out_unlock: label;
    tx_stats_buffers[] is allocated for IBMVNIC_MAX_QUEUES entries so all
    drop paths can safely fall through to the out: stats update. Move
    rcu_read_unlock() to after the stats updates per maintainer suggestion.
    (Rick Lindsley)

 drivers/net/ethernet/ibm/ibmvnic.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
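
For reviewers skimming the logic, the folded guard can be modeled in
plain userspace C. This is an illustrative sketch only; adapter_model
and must_drop() are hypothetical stand-ins, not the driver's real
ibmvnic_adapter or any in-tree helper:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the two adapter fields the guard reads. */
struct adapter_model {
	bool tx_queues_active;
	size_t num_active_tx_scrqs;
};

/* Mirrors the folded condition in ibmvnic_xmit(): drop the skb when a
 * reset has queues inactive, or when a stale queue mapping indexes past
 * the newly allocated per-queue arrays.
 */
static bool must_drop(const struct adapter_model *a, size_t queue_num)
{
	return !a->tx_queues_active || queue_num >= a->num_active_tx_scrqs;
}
```

A stale skb mapped to queue 5 after a reduction to 4 queues takes the
drop path even though tx_queues_active is already true, which is the
case the original guard missed.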

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 5a510eed335e..d5c611c3d9ec 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -2444,14 +2444,15 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 	 * rcu to ensure reset waits for us to complete.
 	 */
 	rcu_read_lock();
-	if (!adapter->tx_queues_active) {
+	if (!adapter->tx_queues_active ||
+	    queue_num >= adapter->num_active_tx_scrqs) {
 		dev_kfree_skb_any(skb);

 		tx_send_failed++;
 		tx_dropped++;
 		ret = NETDEV_TX_OK;
 		goto out;
 	}

 	tx_scrq = adapter->tx_scrq[queue_num];
 	txq = netdev_get_tx_queue(netdev, queue_num);
@@ -2663,14 +2664,13 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
 		netif_tx_stop_all_queues(netdev);
 		netif_carrier_off(netdev);
 	}
 out:
-	rcu_read_unlock();
 	adapter->tx_send_failed += tx_send_failed;
 	adapter->tx_map_failed += tx_map_failed;
 	adapter->tx_stats_buffers[queue_num].batched_packets += tx_bpackets;
 	adapter->tx_stats_buffers[queue_num].direct_packets += tx_dpackets;
 	adapter->tx_stats_buffers[queue_num].bytes += tx_bytes;
 	adapter->tx_stats_buffers[queue_num].dropped_packets += tx_dropped;
-
+	rcu_read_unlock();
 	return ret;
 }

--
2.43.0