Date: Fri, 21 Mar 2025 00:29:07 +0000
In-Reply-To: <20250321002910.1343422-1-hramamurthy@google.com>
References: <20250321002910.1343422-1-hramamurthy@google.com>
Message-ID: <20250321002910.1343422-4-hramamurthy@google.com>
Subject: [PATCH net-next 3/6] gve: update GQ RX to use buf_size
From: Harshitha Ramamurthy
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, hramamurthy@google.com, andrew+netdev@lunn.ch,
	davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, ast@kernel.org, daniel@iogearbox.net,
	hawk@kernel.org, john.fastabend@gmail.com, pkaligineedi@google.com,
	willemb@google.com, ziweixiao@google.com, joshwash@google.com,
	horms@kernel.org, shailend@google.com, bcf@google.com,
	linux-kernel@vger.kernel.org, bpf@vger.kernel.org

From: Joshua Washington

Commit ebdfae0d377b ("gve: adopt page pool for DQ RDA mode") introduced
a buf_size field to the gve_rx_slot_page_info struct. That field can be
used in the datapath in place of the ring-level packet_buffer_size
field, as it will already be hot in the cache due to its extensive use.

Using the buf_size field in the datapath frees up the
packet_buffer_size field in the GQ-specific RX cacheline to be
generalized for GQ and DQ (in the next patch), as there is currently no
common packet buffer size field between the two queue formats.
Reviewed-by: Willem de Bruijn
Signed-off-by: Joshua Washington
Signed-off-by: Harshitha Ramamurthy
---
 drivers/net/ethernet/google/gve/gve_rx.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index 7b774cc510cc..9d444e723fcd 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -141,12 +141,15 @@ void gve_rx_free_ring_gqi(struct gve_priv *priv, struct gve_rx_ring *rx,
 	netif_dbg(priv, drv, priv->dev, "freed rx ring %d\n", idx);
 }
 
-static void gve_setup_rx_buffer(struct gve_rx_slot_page_info *page_info,
-				dma_addr_t addr, struct page *page, __be64 *slot_addr)
+static void gve_setup_rx_buffer(struct gve_rx_ring *rx,
+				struct gve_rx_slot_page_info *page_info,
+				dma_addr_t addr, struct page *page,
+				__be64 *slot_addr)
 {
 	page_info->page = page;
 	page_info->page_offset = 0;
 	page_info->page_address = page_address(page);
+	page_info->buf_size = rx->packet_buffer_size;
 	*slot_addr = cpu_to_be64(addr);
 	/* The page already has 1 ref */
 	page_ref_add(page, INT_MAX - 1);
@@ -171,7 +174,7 @@ static int gve_rx_alloc_buffer(struct gve_priv *priv, struct device *dev,
 		return err;
 	}
 
-	gve_setup_rx_buffer(page_info, dma, page, &data_slot->addr);
+	gve_setup_rx_buffer(rx, page_info, dma, page, &data_slot->addr);
 	return 0;
 }
 
@@ -199,7 +202,8 @@ static int gve_rx_prefill_pages(struct gve_rx_ring *rx,
 			struct page *page = rx->data.qpl->pages[i];
 			dma_addr_t addr = i * PAGE_SIZE;
 
-			gve_setup_rx_buffer(&rx->data.page_info[i], addr, page,
+			gve_setup_rx_buffer(rx, &rx->data.page_info[i], addr,
+					    page,
 					    &rx->data.data_ring[i].qpl_offset);
 			continue;
 		}
@@ -222,6 +226,7 @@ static int gve_rx_prefill_pages(struct gve_rx_ring *rx,
 		rx->qpl_copy_pool[j].page = page;
 		rx->qpl_copy_pool[j].page_offset = 0;
 		rx->qpl_copy_pool[j].page_address = page_address(page);
+		rx->qpl_copy_pool[j].buf_size = rx->packet_buffer_size;
 
 		/* The page already has 1 ref. */
 		page_ref_add(page, INT_MAX - 1);
@@ -283,6 +288,7 @@ int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
 
 	rx->gve = priv;
 	rx->q_num = idx;
+	rx->packet_buffer_size = GVE_DEFAULT_RX_BUFFER_SIZE;
 
 	rx->mask = slots - 1;
 	rx->data.raw_addressing = cfg->raw_addressing;
@@ -351,7 +357,6 @@ int gve_rx_alloc_ring_gqi(struct gve_priv *priv,
 	rx->db_threshold = slots / 2;
 	gve_rx_init_ring_state_gqi(rx);
 
-	rx->packet_buffer_size = GVE_DEFAULT_RX_BUFFER_SIZE;
 	gve_rx_ctx_clear(&rx->ctx);
 
 	return 0;
@@ -590,7 +595,7 @@ static struct sk_buff *gve_rx_copy_to_pool(struct gve_rx_ring *rx,
 	copy_page_info->pad = page_info->pad;
 
 	skb = gve_rx_add_frags(napi, copy_page_info,
-			       rx->packet_buffer_size, len, ctx);
+			       copy_page_info->buf_size, len, ctx);
 	if (unlikely(!skb))
 		return NULL;
 
@@ -630,7 +635,8 @@ gve_rx_qpl(struct device *dev, struct net_device *netdev,
 	 * device.
 	 */
 	if (page_info->can_flip) {
-		skb = gve_rx_add_frags(napi, page_info, rx->packet_buffer_size, len, ctx);
+		skb = gve_rx_add_frags(napi, page_info, page_info->buf_size,
+				       len, ctx);
 		/* No point in recycling if we didn't get the skb */
 		if (skb) {
 			/* Make sure that the page isn't freed. */
@@ -680,7 +686,7 @@ static struct sk_buff *gve_rx_skb(struct gve_priv *priv, struct gve_rx_ring *rx,
 		skb = gve_rx_raw_addressing(&priv->pdev->dev, netdev,
 					    page_info, len, napi,
 					    data_slot,
-					    rx->packet_buffer_size, ctx);
+					    page_info->buf_size, ctx);
 	} else {
 		skb = gve_rx_qpl(&priv->pdev->dev, netdev, rx,
 				 page_info, len, napi, data_slot);
@@ -855,7 +861,7 @@ static void gve_rx(struct gve_rx_ring *rx, netdev_features_t feat,
 	void *old_data;
 	int xdp_act;
 
-	xdp_init_buff(&xdp, rx->packet_buffer_size, &rx->xdp_rxq);
+	xdp_init_buff(&xdp, page_info->buf_size, &rx->xdp_rxq);
 	xdp_prepare_buff(&xdp, page_info->page_address +
 			 page_info->page_offset, GVE_RX_PAD,
 			 len, false);
-- 
2.49.0.rc1.451.g8f38331e32-goog
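
For readers outside the gve driver, the idea the patch applies generalizes:
copy a ring-wide configuration value into each per-buffer descriptor at
setup time so the hot receive path only reads a cacheline it must load
anyway. Below is a minimal, compilable sketch of that pattern; the struct
layouts, types, and helper names are simplified stand-ins for illustration,
not the driver's actual definitions.

#include <stdint.h>

struct page;	/* opaque here; only used as a pointer */

/* Per-buffer state; the datapath already touches page, page_address,
 * and page_offset for every packet, so this line is hot. */
struct rx_slot_page_info {
	struct page *page;
	void *page_address;
	uint32_t page_offset;
	uint16_t buf_size;	/* per-buffer copy of the ring-wide size */
};

/* Ring-wide state; packet_buffer_size may sit on a cacheline the
 * per-packet path would otherwise not need. */
struct rx_ring {
	uint16_t packet_buffer_size;
	struct rx_slot_page_info *page_info;
};

/* Setup path (cold): stamp the ring-wide size into each buffer once. */
static void setup_rx_buffer(struct rx_ring *rx, struct rx_slot_page_info *pi)
{
	pi->buf_size = rx->packet_buffer_size;
}

/* Datapath (hot): read the size from the already-hot page_info line
 * instead of dereferencing the ring struct again. */
static uint16_t rx_buf_size(const struct rx_slot_page_info *pi)
{
	return pi->buf_size;
}

The cost is a couple of duplicated bytes per buffer; the payoff is one
fewer potentially cold load per packet and, as the commit message notes,
the freedom to reshape the ring-level field without touching the datapath.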