From nobody Sat Feb 7 18:20:32 2026
Date: Fri, 6 Feb 2026 17:17:32 -0800
In-Reply-To: <20260207011734.437205-1-joshwash@google.com>
References: <20260207011734.437205-1-joshwash@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
X-Mailer: git-send-email 2.53.0.rc2.204.g2597b5adb4-goog
Message-ID: <20260207011734.437205-2-joshwash@google.com>
Subject: [PATCH net-next 1/2] gve: Update QPL page registration logic
From: Joshua Washington
To: netdev@vger.kernel.org
Cc: Joshua Washington, Harshitha Ramamurthy, Andrew Lunn,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Willem de Bruijn, Praveen Kaligineedi, Ziwei Xiao, John Fraker,
 Matt Olson, Bailey Forrest, Tim Hostetler, Jordan Rhee,
 linux-kernel@vger.kernel.org, Max Yuan
Content-Type: text/plain; charset="utf-8"

From: Matt Olson

For DQO, change the QPL page registration logic to be more flexible, so
that it honors the "max_registered_pages" parameter from the gVNIC
device. Previously, the number of RX pages per QPL was hardcoded to
twice the ring size, and the number of TX pages per QPL was dictated by
the device in the DQO-QPL device option.

Now, in DQO-QPL mode, the driver ignores the "tx_pages_per_qpl"
parameter indicated in the DQO-QPL device option and instead allocates
up to (tx_queue_length / 2) pages per TX QPL and up to
(rx_queue_length * 2) pages per RX QPL, while keeping the total number
of pages under "max_registered_pages".

Merge the DQO and GQI QPL page calculation logic into a unified
gve_update_num_qpl_pages() function. Add rx_pages_per_qpl to the priv
struct for consumption by both DQO and GQI.
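The proportional split described above can be sketched as follows. This is an illustrative model with hypothetical queue counts and page limits, not the driver code; the real logic lives in gve_update_num_qpl_pages() and uses the device-reported "max_registered_pages":

```python
def split_qpl_pages(tx_ring, rx_ring, num_tx, num_rx, max_registered_pages):
    """Illustrative sketch of the DQO-QPL page budget split.

    Ideal demand is half a page per TX descriptor and two pages per RX
    descriptor; the total is shrunk proportionally to stay under the
    device's registered-page limit (integer math, as in the kernel).
    Returns (tx_pages_per_qpl, rx_pages_per_qpl).
    """
    ideal_tx = tx_ring * num_tx // 2
    ideal_rx = rx_ring * num_rx * 2
    max_pages = min(max_registered_pages, ideal_tx + ideal_rx)
    tx_pages = max_pages * ideal_tx // (ideal_tx + ideal_rx)
    return tx_pages // num_tx, (max_pages - tx_pages) // num_rx

# With 1024-entry rings and 4 queues each, an unconstrained device grants
# the full ideal demand; a 5120-page limit halves both shares.
unconstrained = split_qpl_pages(1024, 1024, 4, 4, 10**9)  # (512, 2048)
constrained = split_qpl_pages(1024, 1024, 4, 4, 5120)     # (256, 1024)
```

The key property is that the TX/RX ratio (here 1:4) is preserved whether or not the limit binds.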
Signed-off-by: Matt Olson
Signed-off-by: Max Yuan
Reviewed-by: Jordan Rhee
Reviewed-by: Harshitha Ramamurthy
Reviewed-by: Willem de Bruijn
Reviewed-by: Praveen Kaligineedi
Signed-off-by: Joshua Washington
---
 drivers/net/ethernet/google/gve/gve.h                 | 18 ++++++--------
 drivers/net/ethernet/google/gve/gve_adminq.c          |  8 -------
 drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c |  2 +-
 drivers/net/ethernet/google/gve/gve_main.c            | 40 ++++++++++++++++++++
 drivers/net/ethernet/google/gve/gve_rx.c              |  5 +---
 drivers/net/ethernet/google/gve/gve_rx_dqo.c          |  6 ++---
 drivers/net/ethernet/google/gve/gve_tx.c              |  5 +---
 drivers/net/ethernet/google/gve/gve_tx_dqo.c          |  4 +---
 8 files changed, 53 insertions(+), 35 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 970d5ca8..adc91d6a 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -79,8 +79,6 @@
 
 #define GVE_DEFAULT_HEADER_BUFFER_SIZE 128
 
-#define DQO_QPL_DEFAULT_TX_PAGES 512
-
 /* Maximum TSO size supported on DQO */
 #define GVE_DQO_TX_MAX	0x3FFFF
 
@@ -711,6 +709,7 @@ struct gve_ptype_lut {
 /* Parameters for allocating resources for tx queues */
 struct gve_tx_alloc_rings_cfg {
 	struct gve_tx_queue_config *qcfg;
+	u16 pages_per_qpl;
 
 	u16 num_xdp_rings;
 
@@ -726,6 +725,7 @@ struct gve_rx_alloc_rings_cfg {
 	/* tx config is also needed to determine QPL ids */
 	struct gve_rx_queue_config *qcfg_rx;
 	struct gve_tx_queue_config *qcfg_tx;
+	u16 pages_per_qpl;
 
 	u16 ring_size;
 	u16 packet_buffer_size;
@@ -816,7 +816,8 @@ struct gve_priv {
 	u16 min_rx_desc_cnt;
 	bool modify_ring_size_enabled;
 	bool default_min_ring_size;
-	u16 tx_pages_per_qpl; /* Suggested number of pages per qpl for TX queues by NIC */
+	u16 tx_pages_per_qpl;
+	u16 rx_pages_per_qpl;
 	u64 max_registered_pages;
 	u64 num_registered_pages; /* num pages registered with NIC */
 	struct bpf_prog *xdp_prog; /* XDP BPF program */
@@ -1150,14 +1151,6 @@ static inline u32 gve_rx_start_qpl_id(const struct gve_tx_queue_config *tx_cfg)
 	return gve_get_rx_qpl_id(tx_cfg, 0);
 }
 
-static inline u32 gve_get_rx_pages_per_qpl_dqo(u32 rx_desc_cnt)
-{
-	/* For DQO, page count should be more than ring size for
-	 * out-of-order completions. Set it to two times of ring size.
-	 */
-	return 2 * rx_desc_cnt;
-}
-
 /* Returns the correct dma direction for tx and rx qpls */
 static inline enum dma_data_direction gve_qpl_dma_dir(struct gve_priv *priv,
						       int id)
@@ -1303,6 +1296,9 @@ int gve_reset(struct gve_priv *priv, bool attempt_teardown);
 void gve_get_curr_alloc_cfgs(struct gve_priv *priv,
			     struct gve_tx_alloc_rings_cfg *tx_alloc_cfg,
			     struct gve_rx_alloc_rings_cfg *rx_alloc_cfg);
+void gve_update_num_qpl_pages(struct gve_priv *priv,
+			      struct gve_rx_alloc_rings_cfg *rx_alloc_cfg,
+			      struct gve_tx_alloc_rings_cfg *tx_alloc_cfg);
 int gve_adjust_config(struct gve_priv *priv,
		      struct gve_tx_alloc_rings_cfg *tx_alloc_cfg,
		      struct gve_rx_alloc_rings_cfg *rx_alloc_cfg);
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index f27b9501..b1983f97 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -970,14 +970,6 @@ static void gve_enable_supported_features(struct gve_priv *priv,
 		priv->dev->max_mtu = be16_to_cpu(dev_op_jumbo_frames->max_mtu);
 	}
 
-	/* Override pages for qpl for DQO-QPL */
-	if (dev_op_dqo_qpl) {
-		priv->tx_pages_per_qpl =
-			be16_to_cpu(dev_op_dqo_qpl->tx_pages_per_qpl);
-		if (priv->tx_pages_per_qpl == 0)
-			priv->tx_pages_per_qpl = DQO_QPL_DEFAULT_TX_PAGES;
-	}
-
 	if (dev_op_buffer_sizes &&
	    (supported_features_mask & GVE_SUP_BUFFER_SIZES_MASK)) {
		priv->max_rx_buffer_size =
diff --git a/drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c b/drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c
index 0e2b703c..6880d153 100644
--- a/drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_buffer_mgmt_dqo.c
@@ -133,7 +133,7 @@ int gve_alloc_qpl_page_dqo(struct gve_rx_ring *rx,
 	u32 idx;
 
 	idx = rx->dqo.next_qpl_page_idx;
-	if (idx >= gve_get_rx_pages_per_qpl_dqo(priv->rx_desc_cnt)) {
+	if (idx >= priv->rx_pages_per_qpl) {
 		net_err_ratelimited("%s: Out of QPL pages\n",
				    priv->dev->name);
 		return -ENOMEM;
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 4feaa481..7a26faeb 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -965,6 +965,7 @@ static void gve_tx_get_curr_alloc_cfg(struct gve_priv *priv,
 	cfg->qcfg = &priv->tx_cfg;
 	cfg->raw_addressing = !gve_is_qpl(priv);
 	cfg->ring_size = priv->tx_desc_cnt;
+	cfg->pages_per_qpl = priv->tx_pages_per_qpl;
 	cfg->num_xdp_rings = cfg->qcfg->num_xdp_queues;
 	cfg->tx = priv->tx;
 }
@@ -996,12 +997,48 @@ static void gve_tx_start_rings(struct gve_priv *priv, int num_rings)
 	}
 }
 
+void gve_update_num_qpl_pages(struct gve_priv *priv,
+			      struct gve_rx_alloc_rings_cfg *rx_alloc_cfg,
+			      struct gve_tx_alloc_rings_cfg *tx_alloc_cfg)
+{
+	u64 ideal_tx_pages, ideal_rx_pages;
+	u16 tx_num_queues, rx_num_queues;
+	u64 max_pages, tx_pages;
+
+	if (priv->queue_format == GVE_GQI_QPL_FORMAT) {
+		rx_alloc_cfg->pages_per_qpl = rx_alloc_cfg->ring_size;
+	} else if (priv->queue_format == GVE_DQO_QPL_FORMAT) {
+		/*
+		 * We want 2 pages per RX descriptor and half a page per TX
+		 * descriptor, which means the fraction ideal_tx_pages /
+		 * (ideal_tx_pages + ideal_rx_pages) of the pages we allocate
+		 * should be for TX. Shrink proportionally as necessary to avoid
+		 * allocating more than max_registered_pages total pages.
+		 */
+		tx_num_queues = tx_alloc_cfg->qcfg->num_queues;
+		rx_num_queues = rx_alloc_cfg->qcfg_rx->num_queues;
+
+		ideal_tx_pages = tx_alloc_cfg->ring_size * tx_num_queues / 2;
+		ideal_rx_pages = rx_alloc_cfg->ring_size * rx_num_queues * 2;
+		max_pages = min(priv->max_registered_pages,
+				ideal_tx_pages + ideal_rx_pages);
+
+		tx_pages = (max_pages * ideal_tx_pages) /
+			   (ideal_tx_pages + ideal_rx_pages);
+		tx_alloc_cfg->pages_per_qpl = tx_pages / tx_num_queues;
+		rx_alloc_cfg->pages_per_qpl = (max_pages - tx_pages) /
+					      rx_num_queues;
+	}
+}
+
 static int gve_queues_mem_alloc(struct gve_priv *priv,
				struct gve_tx_alloc_rings_cfg *tx_alloc_cfg,
				struct gve_rx_alloc_rings_cfg *rx_alloc_cfg)
 {
 	int err;
 
+	gve_update_num_qpl_pages(priv, rx_alloc_cfg, tx_alloc_cfg);
+
 	if (gve_is_gqi(priv))
 		err = gve_tx_alloc_rings_gqi(priv, tx_alloc_cfg);
 	else
@@ -1292,6 +1329,7 @@ static void gve_rx_get_curr_alloc_cfg(struct gve_priv *priv,
 	cfg->raw_addressing = !gve_is_qpl(priv);
 	cfg->enable_header_split = priv->header_split_enabled;
 	cfg->ring_size = priv->rx_desc_cnt;
+	cfg->pages_per_qpl = priv->rx_pages_per_qpl;
 	cfg->packet_buffer_size = priv->rx_cfg.packet_buffer_size;
 	cfg->rx = priv->rx;
 	cfg->xdp = !!cfg->qcfg_tx->num_xdp_queues;
@@ -1371,6 +1409,8 @@ static int gve_queues_start(struct gve_priv *priv,
 	priv->rx_cfg = *rx_alloc_cfg->qcfg_rx;
 	priv->tx_desc_cnt = tx_alloc_cfg->ring_size;
 	priv->rx_desc_cnt = rx_alloc_cfg->ring_size;
+	priv->tx_pages_per_qpl = tx_alloc_cfg->pages_per_qpl;
+	priv->rx_pages_per_qpl = rx_alloc_cfg->pages_per_qpl;
 
 	gve_tx_start_rings(priv, gve_num_tx_queues(priv));
 	gve_rx_start_rings(priv, rx_alloc_cfg->qcfg_rx->num_queues);
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 9a37bd99..f466fe82 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -277,7 +277,6 @@ int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
 	struct device *hdev = &priv->pdev->dev;
 	u32 slots = cfg->ring_size;
 	int filled_pages;
-	int qpl_page_cnt;
 	u32 qpl_id = 0;
 	size_t bytes;
 	int err;
@@ -313,10 +312,8 @@ int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
 
 	if (!rx->data.raw_addressing) {
 		qpl_id = gve_get_rx_qpl_id(cfg->qcfg_tx, rx->q_num);
-		qpl_page_cnt = cfg->ring_size;
-
 		rx->data.qpl = gve_alloc_queue_page_list(priv, qpl_id,
-							 qpl_page_cnt);
+							 cfg->pages_per_qpl);
 		if (!rx->data.qpl) {
			err = -ENOMEM;
			goto abort_with_copy_pool;
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index d2f5c2d7..57c45c54 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -218,7 +218,6 @@ int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
 {
 	struct device *hdev = &priv->pdev->dev;
 	struct page_pool *pool;
-	int qpl_page_cnt;
 	size_t size;
 	u32 qpl_id;
 
@@ -246,7 +245,7 @@ int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
 	XSK_CHECK_PRIV_TYPE(struct gve_xdp_buff);
 
 	rx->dqo.num_buf_states = cfg->raw_addressing ? buffer_queue_slots :
-		gve_get_rx_pages_per_qpl_dqo(cfg->ring_size);
+		cfg->pages_per_qpl;
 	rx->dqo.buf_states = kvcalloc_node(rx->dqo.num_buf_states,
					   sizeof(rx->dqo.buf_states[0]),
					   GFP_KERNEL, priv->numa_node);
@@ -281,10 +280,9 @@ int gve_rx_alloc_ring_dqo(struct gve_priv *priv,
 		rx->dqo.page_pool = pool;
 	} else {
 		qpl_id = gve_get_rx_qpl_id(cfg->qcfg_tx, rx->q_num);
-		qpl_page_cnt = gve_get_rx_pages_per_qpl_dqo(cfg->ring_size);
 
 		rx->dqo.qpl = gve_alloc_queue_page_list(priv, qpl_id,
-							qpl_page_cnt);
+							cfg->pages_per_qpl);
 		if (!rx->dqo.qpl)
			goto err;
		rx->dqo.next_qpl_page_idx = 0;
diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index 97efc8d2..65401a05 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -264,7 +264,6 @@ static int gve_tx_alloc_ring_gqi(struct gve_priv *priv,
				 int idx)
 {
 	struct device *hdev = &priv->pdev->dev;
-	int qpl_page_cnt;
 	u32 qpl_id = 0;
 	size_t bytes;
 
@@ -291,10 +290,8 @@ static int gve_tx_alloc_ring_gqi(struct gve_priv *priv,
 	tx->dev = hdev;
 	if (!tx->raw_addressing) {
 		qpl_id = gve_tx_qpl_id(priv, tx->q_num);
-		qpl_page_cnt = priv->tx_pages_per_qpl;
-
 		tx->tx_fifo.qpl = gve_alloc_queue_page_list(priv, qpl_id,
-							    qpl_page_cnt);
+							    cfg->pages_per_qpl);
 		if (!tx->tx_fifo.qpl)
			goto abort_with_desc;
 
diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
index a2b22004..57361406 100644
--- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
@@ -302,7 +302,6 @@ static int gve_tx_alloc_ring_dqo(struct gve_priv *priv,
 {
 	struct device *hdev = &priv->pdev->dev;
 	int num_pending_packets;
-	int qpl_page_cnt;
 	size_t bytes;
 	u32 qpl_id;
 	int i;
@@ -384,10 +383,9 @@ static int gve_tx_alloc_ring_dqo(struct gve_priv *priv,
 
 	if (!cfg->raw_addressing) {
 		qpl_id = gve_tx_qpl_id(priv, tx->q_num);
-		qpl_page_cnt = priv->tx_pages_per_qpl;
 
 		tx->dqo.qpl = gve_alloc_queue_page_list(priv, qpl_id,
-							qpl_page_cnt);
+							cfg->pages_per_qpl);
 		if (!tx->dqo.qpl)
			goto err;
 
-- 
2.53.0.239.g8d8fc8a987-goog

From nobody Sat Feb 7 18:20:32 2026
Date: Fri, 6 Feb 2026 17:17:33 -0800
In-Reply-To: <20260207011734.437205-1-joshwash@google.com>
References: <20260207011734.437205-1-joshwash@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
X-Mailer: git-send-email 2.53.0.rc2.204.g2597b5adb4-goog
Message-ID: <20260207011734.437205-3-joshwash@google.com>
Subject: [PATCH net-next 2/2] gve: Enable reading max ring size from the
 device in DQO-QPL mode
From: Joshua Washington
To: netdev@vger.kernel.org
Cc: Joshua Washington, Harshitha Ramamurthy, Andrew Lunn,
 "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Willem de Bruijn, Praveen Kaligineedi, Ziwei Xiao, John Fraker,
 Matt Olson, Bailey Forrest, Tim Hostetler, Jordan Rhee,
 linux-kernel@vger.kernel.org, Max Yuan
Content-Type: text/plain; charset="utf-8"

From: Matt Olson

The gVNIC device advertises a device option (MODIFY_RING) to the
driver, which presents a range of ring sizes from which the user is
allowed to select. In the DQO-QPL queue format, however, the driver
ignored the "max" of this range and instead allowed the user to
configure the ring size only within the range [min, default]. This was
done because increasing the ring size could push the number of
registered pages above the maximum allowed by the device.

In order to support large ring sizes, stop ignoring the "max" of the
range presented in the MODIFY_RING option.
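To see why the upper bound was previously unsafe, the pre-patch DQO-QPL page demand can be modeled with a small sketch. The queue counts and page numbers below are hypothetical, chosen only to show the growth; this is not driver code:

```python
def old_dqo_qpl_pages(num_tx, tx_pages_per_qpl, num_rx, rx_ring_size):
    """Pre-patch DQO-QPL registered-page demand: a device-dictated TX
    page count per QPL plus a hardcoded two pages per RX descriptor.
    Nothing here caps the total against the device's page limit."""
    return num_tx * tx_pages_per_qpl + num_rx * 2 * rx_ring_size

# With 4 queues each and 512 TX pages per QPL, growing the RX ring from
# 1024 to 4096 entries more than triples total registered-page demand,
# which is why the old code clamped the ring size to the device default.
small = old_dqo_qpl_pages(4, 512, 4, 1024)  # 10240 pages
large = old_dqo_qpl_pages(4, 512, 4, 4096)  # 34816 pages
```

With patch 1/2 budgeting pages against "max_registered_pages", this unbounded growth can no longer occur, so the clamp is removed here.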
Signed-off-by: Matt Olson
Signed-off-by: Max Yuan
Reviewed-by: Jordan Rhee
Reviewed-by: Harshitha Ramamurthy
Reviewed-by: Praveen Kaligineedi
Signed-off-by: Joshua Washington
---
 drivers/net/ethernet/google/gve/gve_adminq.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index b1983f97..08587bf4 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -989,12 +989,10 @@ static void gve_enable_supported_features(struct gve_priv *priv,
 	if (dev_op_modify_ring &&
	    (supported_features_mask & GVE_SUP_MODIFY_RING_MASK)) {
 		priv->modify_ring_size_enabled = true;
-
-		/* max ring size for DQO QPL should not be overwritten because of device limit */
-		if (priv->queue_format != GVE_DQO_QPL_FORMAT) {
-			priv->max_rx_desc_cnt = be16_to_cpu(dev_op_modify_ring->max_rx_ring_size);
-			priv->max_tx_desc_cnt = be16_to_cpu(dev_op_modify_ring->max_tx_ring_size);
-		}
+		priv->max_rx_desc_cnt =
+			be16_to_cpu(dev_op_modify_ring->max_rx_ring_size);
+		priv->max_tx_desc_cnt =
+			be16_to_cpu(dev_op_modify_ring->max_tx_ring_size);
 		if (priv->default_min_ring_size) {
 			/* If device hasn't provided minimums, use default minimums */
 			priv->min_tx_desc_cnt = GVE_DEFAULT_MIN_TX_RING_SIZE;
-- 
2.53.0.239.g8d8fc8a987-goog