From: Harshitha Ramamurthy <hramamurthy@google.com>
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, hramamurthy@google.com, andrew+netdev@lunn.ch,
	davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, ast@kernel.org, daniel@iogearbox.net,
	hawk@kernel.org, john.fastabend@gmail.com, sdf@fomichev.me,
	willemb@google.com, ziweixiao@google.com, pkaligineedi@google.com,
	joshwash@google.com, linux-kernel@vger.kernel.org, bpf@vger.kernel.org
Date: Wed, 18 Jun 2025 20:56:11 +0000
Message-ID: <20250618205613.1432007-2-hramamurthy@google.com>
In-Reply-To: <20250618205613.1432007-1-hramamurthy@google.com>
References: <20250618205613.1432007-1-hramamurthy@google.com>
Subject: [PATCH net-next 1/3] gve: rename gve_xdp_xmit to gve_xdp_xmit_gqi

From: Joshua Washington <joshwash@google.com>

In preparation for XDP DQ support, the gve_xdp_xmit callback needs to be
generalized for all queue formats. This patch renames the GQ-specific
function to gve_xdp_xmit_gqi and introduces a new gve_xdp_xmit callback
which branches on queue format.

Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Joshua Washington <joshwash@google.com>
Signed-off-by: Harshitha Ramamurthy <hramamurthy@google.com>
---
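For context (a below-the-fold note, kept out of git history): ndo_xdp_xmit
is a single per-netdev callback, so any dispatch on queue format has to
happen inside the callback itself. A minimal sketch of the registration
this shape assumes; illustrative only, since the gve_netdev_ops
initializer is not part of this diff:

#include <linux/netdevice.h>

/* Defined in gve_main.c by this patch; prototype shown for completeness. */
static int gve_xdp_xmit(struct net_device *dev, int n,
			struct xdp_frame **frames, u32 flags);

static const struct net_device_ops gve_netdev_ops = {
	/* ...other callbacks elided... */
	.ndo_xdp_xmit	= gve_xdp_xmit,	/* one hook for every queue format */
};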
 drivers/net/ethernet/google/gve/gve.h      |  4 ++--
 drivers/net/ethernet/google/gve/gve_main.c | 10 ++++++++++
 drivers/net/ethernet/google/gve/gve_tx.c   |  4 ++--
 3 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 4469442d4940..de1fc23c44f9 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -1178,8 +1178,8 @@ void gve_free_queue_page_list(struct gve_priv *priv, u32 id);
 
 /* tx handling */
 netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev);
-int gve_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
-		 u32 flags);
+int gve_xdp_xmit_gqi(struct net_device *dev, int n, struct xdp_frame **frames,
+		     u32 flags);
 int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
 		     void *data, int len, void *frame_p);
 void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid);
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 28e4795f5f40..eff970124dba 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -1516,6 +1516,16 @@ static int gve_set_xdp(struct gve_priv *priv, struct bpf_prog *prog,
 	return err;
 }
 
+static int gve_xdp_xmit(struct net_device *dev, int n,
+			struct xdp_frame **frames, u32 flags)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+
+	if (gve_is_gqi(priv))
+		return gve_xdp_xmit_gqi(dev, n, frames, flags);
+	return -EOPNOTSUPP;
+}
+
 static int gve_xsk_pool_enable(struct net_device *dev,
 			       struct xsk_buff_pool *pool,
 			       u16 qid)
diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index 1b40bf0c811a..c6ff0968929d 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -823,8 +823,8 @@ static int gve_tx_fill_xdp(struct gve_priv *priv, struct gve_tx_ring *tx,
 	return ndescs;
 }
 
-int gve_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
-		 u32 flags)
+int gve_xdp_xmit_gqi(struct net_device *dev, int n, struct xdp_frame **frames,
+		     u32 flags)
 {
 	struct gve_priv *priv = netdev_priv(dev);
 	struct gve_tx_ring *tx;
-- 
2.50.0.rc2.761.g2dc52ea45b-goog

From: Harshitha Ramamurthy <hramamurthy@google.com>
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, hramamurthy@google.com, andrew+netdev@lunn.ch,
	davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, ast@kernel.org, daniel@iogearbox.net,
	hawk@kernel.org, john.fastabend@gmail.com, sdf@fomichev.me,
	willemb@google.com, ziweixiao@google.com, pkaligineedi@google.com,
	joshwash@google.com, linux-kernel@vger.kernel.org, bpf@vger.kernel.org
Date: Wed, 18 Jun 2025 20:56:12 +0000
Message-ID: <20250618205613.1432007-3-hramamurthy@google.com>
In-Reply-To: <20250618205613.1432007-1-hramamurthy@google.com>
References: <20250618205613.1432007-1-hramamurthy@google.com>
Subject: [PATCH net-next 2/3] gve: refactor DQO TX methods to be more generic for XDP
From: Joshua Washington <joshwash@google.com>

This patch performs various minor DQO TX datapath refactors in preparation
for adding XDP_TX and XDP_REDIRECT support. The following refactors are
performed:

1) gve_tx_fill_pkt_desc_dqo() relies on an SKB pointer to determine
   whether checksum offloading should be enabled. This won't work for the
   XDP case, which does not have an SKB. This patch updates the method to
   take a boolean that directly indicates whether checksum offloading
   should be enabled.

2) gve_maybe_stop_tx_dqo() contains some synchronization between the true
   TX head and its cached value, synchronization that is common to XDP
   queues and normal netdev queues. However, that method is reserved for
   netdev TX queues. To avoid duplicating code, this logic is factored out
   into a new method, gve_has_tx_slots_available() (a generic sketch of
   the caching pattern follows after the --- below).

3) gve_tx_update_tail() is added to update the TX tail, functionality that
   will be common to the normal TX and XDP TX codepaths.

Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Joshua Washington <joshwash@google.com>
Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Signed-off-by: Harshitha Ramamurthy <hramamurthy@google.com>
---
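Below-the-fold note (not part of the commit message): the head-caching
scheme that gve_has_tx_slots_available() centralizes is a standard
ring-buffer trick. A condensed, self-contained sketch with illustrative
names and types (not the driver's actual ones):

#include <linux/atomic.h>
#include <linux/types.h>

/* Illustrative ring: driver-owned tail, a possibly stale cached copy of
 * the consumer index, and the atomic head published by completions.
 */
struct ring {
	u32 tail;		/* next slot the producer will post */
	u32 cached_head;	/* local snapshot of the consumer index */
	u32 mask;		/* ring size - 1, size is a power of 2 */
	atomic_t hw_head;	/* updated as completions are processed */
};

static bool ring_has_room(struct ring *r, u32 want)
{
	u32 used = (r->tail - r->cached_head) & r->mask;

	/* Fast path: the stale snapshot already shows enough space. */
	if (r->mask - used >= want)
		return true;

	/* Slow path: refresh the snapshot. The acquire pairs with the
	 * release store on the completion side, so slot reuse cannot be
	 * reordered ahead of observing the new head.
	 */
	r->cached_head = (u32)atomic_read_acquire(&r->hw_head);
	used = (r->tail - r->cached_head) & r->mask;
	return r->mask - used >= want;
}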
 drivers/net/ethernet/google/gve/gve_tx_dqo.c | 85 +++++++++++---------
 1 file changed, 47 insertions(+), 38 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
index 9d705d94b065..ba6b5cdaa922 100644
--- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
@@ -439,12 +439,28 @@ static u32 num_avail_tx_slots(const struct gve_tx_ring *tx)
 	return tx->mask - num_used;
 }
 
+/* Checks if the requested number of slots are available in the ring */
+static bool gve_has_tx_slots_available(struct gve_tx_ring *tx, u32 slots_req)
+{
+	u32 num_avail = num_avail_tx_slots(tx);
+
+	slots_req += GVE_TX_MIN_DESC_PREVENT_CACHE_OVERLAP;
+
+	if (num_avail >= slots_req)
+		return true;
+
+	/* Update cached TX head pointer */
+	tx->dqo_tx.head = atomic_read_acquire(&tx->dqo_compl.hw_tx_head);
+
+	return num_avail_tx_slots(tx) >= slots_req;
+}
+
 static bool gve_has_avail_slots_tx_dqo(struct gve_tx_ring *tx,
 				       int desc_count, int buf_count)
 {
 	return gve_has_pending_packet(tx) &&
-	       num_avail_tx_slots(tx) >= desc_count &&
-	       gve_has_free_tx_qpl_bufs(tx, buf_count);
+	       gve_has_tx_slots_available(tx, desc_count) &&
+	       gve_has_free_tx_qpl_bufs(tx, buf_count);
 }
 
 /* Stops the queue if available descriptors is less than 'count'.
@@ -453,12 +469,6 @@ static bool gve_has_avail_slots_tx_dqo(struct gve_tx_ring *tx,
 static int gve_maybe_stop_tx_dqo(struct gve_tx_ring *tx,
 				 int desc_count, int buf_count)
 {
-	if (likely(gve_has_avail_slots_tx_dqo(tx, desc_count, buf_count)))
-		return 0;
-
-	/* Update cached TX head pointer */
-	tx->dqo_tx.head = atomic_read_acquire(&tx->dqo_compl.hw_tx_head);
-
 	if (likely(gve_has_avail_slots_tx_dqo(tx, desc_count, buf_count)))
 		return 0;
 
@@ -472,8 +482,6 @@ static int gve_maybe_stop_tx_dqo(struct gve_tx_ring *tx,
 	/* After stopping queue, check if we can transmit again in order to
 	 * avoid TOCTOU bug.
 	 */
-	tx->dqo_tx.head = atomic_read_acquire(&tx->dqo_compl.hw_tx_head);
-
 	if (likely(!gve_has_avail_slots_tx_dqo(tx, desc_count, buf_count)))
 		return -EBUSY;
 
@@ -500,11 +508,9 @@ static void gve_extract_tx_metadata_dqo(const struct sk_buff *skb,
 }
 
 static void gve_tx_fill_pkt_desc_dqo(struct gve_tx_ring *tx, u32 *desc_idx,
-				     struct sk_buff *skb, u32 len, u64 addr,
+				     bool enable_csum, u32 len, u64 addr,
 				     s16 compl_tag, bool eop, bool is_gso)
 {
-	const bool checksum_offload_en = skb->ip_summed == CHECKSUM_PARTIAL;
-
 	while (len > 0) {
 		struct gve_tx_pkt_desc_dqo *desc =
 			&tx->dqo.tx_ring[*desc_idx].pkt;
@@ -515,7 +521,7 @@ static void gve_tx_fill_pkt_desc_dqo(struct gve_tx_ring *tx, u32 *desc_idx,
 			.buf_addr = cpu_to_le64(addr),
 			.dtype = GVE_TX_PKT_DESC_DTYPE_DQO,
 			.end_of_packet = cur_eop,
-			.checksum_offload_enable = checksum_offload_en,
+			.checksum_offload_enable = enable_csum,
 			.compl_tag = cpu_to_le16(compl_tag),
 			.buf_size = cur_len,
 		};
@@ -612,6 +618,25 @@ gve_tx_fill_general_ctx_desc(struct gve_tx_general_context_desc_dqo *desc,
 	};
 }
 
+static void gve_tx_update_tail(struct gve_tx_ring *tx, u32 desc_idx)
+{
+	u32 last_desc_idx = (desc_idx - 1) & tx->mask;
+	u32 last_report_event_interval =
+		(last_desc_idx - tx->dqo_tx.last_re_idx) & tx->mask;
+
+	/* Commit the changes to our state */
+	tx->dqo_tx.tail = desc_idx;
+
+	/* Request a descriptor completion on the last descriptor of the
+	 * packet if we are allowed to by the HW enforced interval.
+	 */
+
+	if (unlikely(last_report_event_interval >= GVE_TX_MIN_RE_INTERVAL)) {
+		tx->dqo.tx_ring[last_desc_idx].pkt.report_event = true;
+		tx->dqo_tx.last_re_idx = last_desc_idx;
+	}
+}
+
 static int gve_tx_add_skb_no_copy_dqo(struct gve_tx_ring *tx,
 				      struct sk_buff *skb,
 				      struct gve_tx_pending_packet_dqo *pkt,
@@ -619,6 +644,7 @@ static int gve_tx_add_skb_no_copy_dqo(struct gve_tx_ring *tx,
 				      u32 *desc_idx, bool is_gso)
 {
+	bool enable_csum = skb->ip_summed == CHECKSUM_PARTIAL;
 	const struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int i;
 
@@ -644,7 +670,7 @@ static int gve_tx_add_skb_no_copy_dqo(struct gve_tx_ring *tx,
 		dma_unmap_addr_set(pkt, dma[pkt->num_bufs], addr);
 		++pkt->num_bufs;
 
-		gve_tx_fill_pkt_desc_dqo(tx, desc_idx, skb, len, addr,
+		gve_tx_fill_pkt_desc_dqo(tx, desc_idx, enable_csum, len, addr,
 					 completion_tag,
 					 /*eop=*/shinfo->nr_frags == 0, is_gso);
 	}
@@ -664,7 +690,7 @@ static int gve_tx_add_skb_no_copy_dqo(struct gve_tx_ring *tx,
 					  dma[pkt->num_bufs], addr);
 		++pkt->num_bufs;
 
-		gve_tx_fill_pkt_desc_dqo(tx, desc_idx, skb, len, addr,
+		gve_tx_fill_pkt_desc_dqo(tx, desc_idx, enable_csum, len, addr,
 					 completion_tag, is_eop, is_gso);
 	}
 
@@ -709,6 +735,7 @@ static int gve_tx_add_skb_copy_dqo(struct gve_tx_ring *tx,
 				   u32 *desc_idx, bool is_gso)
 {
+	bool enable_csum = skb->ip_summed == CHECKSUM_PARTIAL;
 	u32 copy_offset = 0;
 	dma_addr_t dma_addr;
 	u32 copy_len;
@@ -730,7 +757,7 @@ static int gve_tx_add_skb_copy_dqo(struct gve_tx_ring *tx,
 		copy_offset += copy_len;
 		dma_sync_single_for_device(tx->dev, dma_addr,
 					   copy_len, DMA_TO_DEVICE);
-		gve_tx_fill_pkt_desc_dqo(tx, desc_idx, skb,
+		gve_tx_fill_pkt_desc_dqo(tx, desc_idx, enable_csum,
 					 copy_len,
 					 dma_addr,
 					 completion_tag,
@@ -800,24 +827,7 @@ static int gve_tx_add_skb_dqo(struct gve_tx_ring *tx,
 
 	tx->dqo_tx.posted_packet_desc_cnt += pkt->num_bufs;
 
-	/* Commit the changes to our state */
-	tx->dqo_tx.tail = desc_idx;
-
-	/* Request a descriptor completion on the last descriptor of the
-	 * packet if we are allowed to by the HW enforced interval.
-	 */
-	{
-		u32 last_desc_idx = (desc_idx - 1) & tx->mask;
-		u32 last_report_event_interval =
-			(last_desc_idx - tx->dqo_tx.last_re_idx) & tx->mask;
-
-		if (unlikely(last_report_event_interval >=
-			     GVE_TX_MIN_RE_INTERVAL)) {
-			tx->dqo.tx_ring[last_desc_idx].pkt.report_event = true;
-			tx->dqo_tx.last_re_idx = last_desc_idx;
-		}
-	}
-
+	gve_tx_update_tail(tx, desc_idx);
 	return 0;
 
 err:
@@ -951,9 +961,8 @@ static int gve_try_tx_skb(struct gve_priv *priv, struct gve_tx_ring *tx,
 
 	/* Metadata + (optional TSO) + data descriptors. */
 	total_num_descs = 1 + skb_is_gso(skb) + num_buffer_descs;
-	if (unlikely(gve_maybe_stop_tx_dqo(tx, total_num_descs +
-					   GVE_TX_MIN_DESC_PREVENT_CACHE_OVERLAP,
-					   num_buffer_descs))) {
+	if (unlikely(gve_maybe_stop_tx_dqo(tx, total_num_descs,
+					   num_buffer_descs))) {
 		return -1;
 	}
 
-- 
2.50.0.rc2.761.g2dc52ea45b-goog

From: Harshitha Ramamurthy <hramamurthy@google.com>
To: netdev@vger.kernel.org
Cc: jeroendb@google.com, hramamurthy@google.com, andrew+netdev@lunn.ch,
	davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	pabeni@redhat.com, ast@kernel.org, daniel@iogearbox.net,
	hawk@kernel.org, john.fastabend@gmail.com, sdf@fomichev.me,
	willemb@google.com, ziweixiao@google.com, pkaligineedi@google.com,
	joshwash@google.com, linux-kernel@vger.kernel.org, bpf@vger.kernel.org
Date: Wed, 18 Jun 2025 20:56:13 +0000
Message-ID: <20250618205613.1432007-4-hramamurthy@google.com>
In-Reply-To: <20250618205613.1432007-1-hramamurthy@google.com>
References: <20250618205613.1432007-1-hramamurthy@google.com>
Subject: [PATCH net-next 3/3] gve: add XDP_TX and XDP_REDIRECT support for DQ RDA
From: Joshua Washington <joshwash@google.com>

This patch adds support for XDP_TX and XDP_REDIRECT for the DQ RDA queue
format. To appropriately support transmission of XDP frames, a new pending
packet type, GVE_TX_PENDING_PACKET_DQO_XDP_FRAME, is introduced for
completion handling, as completion handling previously assumed that
completed packets were SKBs. With XDP_TX handled, the basic XDP actions
are all supported, so the feature is recorded accordingly.

This patch also enables the ndo_xdp_xmit callback, allowing DQ to handle
XDP_REDIRECT packets originating from another interface.

The XDP spinlock is moved to the common TX ring fields so that it can be
used in both GQ and DQ. Originally, it was in a section that is mutually
exclusive between GQ and DQ.

In summary, three XDP features are exposed for the DQ RDA queue format:

1) NETDEV_XDP_ACT_BASIC
2) NETDEV_XDP_ACT_NDO_XMIT
3) NETDEV_XDP_ACT_REDIRECT

Note that XDP and header-data split are mutually exclusive for the time
being due to the lack of multi-buffer XDP support. This patch does not add
support for the DQ QPL format; that is to come in a future patch series.

Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Signed-off-by: Joshua Washington <joshwash@google.com>
Signed-off-by: Harshitha Ramamurthy <hramamurthy@google.com>
---
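Below-the-fold note (not part of the commit message): with
NETDEV_XDP_ACT_BASIC exposed, a trivial XDP program is enough to exercise
the new DQ RDA XDP_TX datapath end to end. Generic test program, not part
of this series; build with clang -O2 -target bpf and attach with, e.g.,
"ip link set dev eth0 xdpdrv obj xdp_reflect.o sec xdp":

// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Bounce every received frame back out the same queue via XDP_TX. A real
 * reflector would also swap the Ethernet addresses; frames sent back
 * unmodified are typically discarded by the link peer.
 */
SEC("xdp")
int xdp_reflect(struct xdp_md *ctx)
{
	return XDP_TX;
}

char _license[] SEC("license") = "GPL";

XDP_REDIRECT into a gve DQ RDA interface can then be exercised from a
second interface, since ndo_xdp_xmit is now wired up for this format.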
 drivers/net/ethernet/google/gve/gve.h        |  23 ++-
 drivers/net/ethernet/google/gve/gve_dqo.h    |   2 +
 drivers/net/ethernet/google/gve/gve_main.c   |  34 ++++-
 drivers/net/ethernet/google/gve/gve_rx_dqo.c |  77 ++++++++--
 drivers/net/ethernet/google/gve/gve_tx_dqo.c | 151 +++++++++++++++++--
 5 files changed, 254 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index de1fc23c44f9..cf91195d5f39 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -402,8 +402,16 @@ enum gve_packet_state {
 	GVE_PACKET_STATE_TIMED_OUT_COMPL,
 };
 
+enum gve_tx_pending_packet_dqo_type {
+	GVE_TX_PENDING_PACKET_DQO_SKB,
+	GVE_TX_PENDING_PACKET_DQO_XDP_FRAME
+};
+
 struct gve_tx_pending_packet_dqo {
-	struct sk_buff *skb; /* skb for this packet */
+	union {
+		struct sk_buff *skb;
+		struct xdp_frame *xdpf;
+	};
 
 	/* 0th element corresponds to the linear portion of `skb`, should be
 	 * unmapped with `dma_unmap_single`.
@@ -433,7 +441,10 @@ struct gve_tx_pending_packet_dqo {
 	/* Identifies the current state of the packet as defined in
 	 * `enum gve_packet_state`.
 	 */
-	u8 state;
+	u8 state : 2;
+
+	/* gve_tx_pending_packet_dqo_type */
+	u8 type : 1;
 
 	/* If packet is an outstanding miss completion, then the packet is
 	 * freed if the corresponding re-injection completion is not received
@@ -455,6 +466,9 @@ struct gve_tx_ring {
 
 	/* DQO fields. */
 	struct {
+		/* Spinlock for XDP tx traffic */
+		spinlock_t xdp_lock;
+
 		/* Linked list of gve_tx_pending_packet_dqo. Index into
 		 * pending_packets, or -1 if empty.
 		 *
@@ -1155,6 +1169,7 @@ static inline bool gve_supports_xdp_xmit(struct gve_priv *priv)
 {
 	switch (priv->queue_format) {
 	case GVE_GQI_QPL_FORMAT:
+	case GVE_DQO_RDA_FORMAT:
 		return true;
 	default:
 		return false;
@@ -1180,9 +1195,13 @@ void gve_free_queue_page_list(struct gve_priv *priv,
 netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev);
 int gve_xdp_xmit_gqi(struct net_device *dev, int n, struct xdp_frame **frames,
 		     u32 flags);
+int gve_xdp_xmit_dqo(struct net_device *dev, int n, struct xdp_frame **frames,
+		     u32 flags);
 int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
 		     void *data, int len, void *frame_p);
 void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid);
+int gve_xdp_xmit_one_dqo(struct gve_priv *priv, struct gve_tx_ring *tx,
+			 struct xdp_frame *xdpf);
 bool gve_tx_poll(struct gve_notify_block *block, int budget);
 bool gve_xdp_poll(struct gve_notify_block *block, int budget);
 int gve_xsk_tx_poll(struct gve_notify_block *block, int budget);
diff --git a/drivers/net/ethernet/google/gve/gve_dqo.h b/drivers/net/ethernet/google/gve/gve_dqo.h
index e83773fb891f..bb278727f4d9 100644
--- a/drivers/net/ethernet/google/gve/gve_dqo.h
+++ b/drivers/net/ethernet/google/gve/gve_dqo.h
@@ -37,6 +37,7 @@ netdev_features_t gve_features_check_dqo(struct sk_buff *skb,
 					 struct net_device *dev,
 					 netdev_features_t features);
 bool gve_tx_poll_dqo(struct gve_notify_block *block, bool do_clean);
+bool gve_xdp_poll_dqo(struct gve_notify_block *block);
 int gve_rx_poll_dqo(struct gve_notify_block *block, int budget);
 int gve_tx_alloc_rings_dqo(struct gve_priv *priv,
 			   struct gve_tx_alloc_rings_cfg *cfg);
@@ -60,6 +61,7 @@ int gve_clean_tx_done_dqo(struct gve_priv *priv, struct gve_tx_ring *tx,
 			  struct napi_struct *napi);
 void gve_rx_post_buffers_dqo(struct gve_rx_ring *rx);
 void gve_rx_write_doorbell_dqo(const struct gve_priv *priv, int queue_idx);
+void gve_xdp_tx_flush_dqo(struct gve_priv *priv, u32 xdp_qid);
 
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index eff970124dba..27f97a1d2957 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -414,8 +414,12 @@ int gve_napi_poll_dqo(struct napi_struct *napi, int budget)
 	bool reschedule = false;
 	int work_done = 0;
 
-	if (block->tx)
-		reschedule |= gve_tx_poll_dqo(block, /*do_clean=*/true);
+	if (block->tx) {
+		if (block->tx->q_num < priv->tx_cfg.num_queues)
+			reschedule |= gve_tx_poll_dqo(block, /*do_clean=*/true);
+		else
+			reschedule |= gve_xdp_poll_dqo(block);
+	}
 
 	if (!budget)
 		return 0;
@@ -1521,8 +1525,11 @@ static int gve_xdp_xmit(struct net_device *dev, int n,
 {
 	struct gve_priv *priv = netdev_priv(dev);
 
-	if (gve_is_gqi(priv))
+	if (priv->queue_format == GVE_GQI_QPL_FORMAT)
 		return gve_xdp_xmit_gqi(dev, n, frames, flags);
+	else if (priv->queue_format == GVE_DQO_RDA_FORMAT)
+		return gve_xdp_xmit_dqo(dev, n, frames, flags);
+
 	return -EOPNOTSUPP;
 }
 
@@ -1661,9 +1668,8 @@ static int verify_xdp_configuration(struct net_device *dev)
 		return -EOPNOTSUPP;
 	}
 
-	if (priv->queue_format != GVE_GQI_QPL_FORMAT) {
-		netdev_warn(dev, "XDP is not supported in mode %d.\n",
-			    priv->queue_format);
+	if (priv->header_split_enabled) {
+		netdev_warn(dev, "XDP is not supported when header-data split is enabled.\n");
 		return -EOPNOTSUPP;
 	}
 
@@ -1987,10 +1993,13 @@ u16 gve_get_pkt_buf_size(const struct gve_priv *priv, bool enable_hsplit)
 	return GVE_DEFAULT_RX_BUFFER_SIZE;
 }
 
-/* header-split is not supported on non-DQO_RDA yet even if device advertises it */
+/* Header split is only supported on DQ RDA queue format. If XDP is enabled,
+ * header split is not allowed.
+ */
 bool gve_header_split_supported(const struct gve_priv *priv)
 {
-	return priv->header_buf_size && priv->queue_format == GVE_DQO_RDA_FORMAT;
+	return priv->header_buf_size &&
+		priv->queue_format == GVE_DQO_RDA_FORMAT && !priv->xdp_prog;
 }
 
 int gve_set_hsplit_config(struct gve_priv *priv, u8 tcp_data_split)
@@ -2039,6 +2048,12 @@ static int gve_set_features(struct net_device *netdev,
 
 	if ((netdev->features & NETIF_F_LRO) != (features & NETIF_F_LRO)) {
 		netdev->features ^= NETIF_F_LRO;
+		if (priv->xdp_prog && (netdev->features & NETIF_F_LRO)) {
+			netdev_warn(netdev,
+				    "XDP is not supported when LRO is on.\n");
+			err = -EOPNOTSUPP;
+			goto revert_features;
+		}
 		if (netif_running(netdev)) {
 			err = gve_adjust_config(priv, &tx_alloc_cfg,
 						&rx_alloc_cfg);
 			if (err)
@@ -2240,6 +2255,9 @@ static void gve_set_netdev_xdp_features(struct gve_priv *priv)
 		xdp_features = NETDEV_XDP_ACT_BASIC;
 		xdp_features |= NETDEV_XDP_ACT_REDIRECT;
 		xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY;
+	} else if (priv->queue_format == GVE_DQO_RDA_FORMAT) {
+		xdp_features = NETDEV_XDP_ACT_BASIC;
+		xdp_features |= NETDEV_XDP_ACT_REDIRECT;
 	} else {
 		xdp_features = 0;
 	}
diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
index 0be41a0cdd15..96743e1d80f3 100644
--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
@@ -8,6 +8,7 @@
 #include "gve_dqo.h"
 #include "gve_adminq.h"
 #include "gve_utils.h"
+#include
 #include
 #include
 #include
@@ -570,27 +571,66 @@ static int gve_rx_append_frags(struct napi_struct *napi,
 	return 0;
 }
 
+static int gve_xdp_tx_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
+			  struct xdp_buff *xdp)
+{
+	struct gve_tx_ring *tx;
+	struct xdp_frame *xdpf;
+	u32 tx_qid;
+	int err;
+
+	xdpf = xdp_convert_buff_to_frame(xdp);
+	if (unlikely(!xdpf))
+		return -ENOSPC;
+
+	tx_qid = gve_xdp_tx_queue_id(priv, rx->q_num);
+	tx = &priv->tx[tx_qid];
+	spin_lock(&tx->dqo_tx.xdp_lock);
+	err = gve_xdp_xmit_one_dqo(priv, tx, xdpf);
+	spin_unlock(&tx->dqo_tx.xdp_lock);
+
+	return err;
+}
+
 static void gve_xdp_done_dqo(struct gve_priv *priv, struct gve_rx_ring *rx,
 			     struct xdp_buff *xdp, struct bpf_prog *xprog,
 			     int xdp_act,
 			     struct gve_rx_buf_state_dqo *buf_state)
 {
-	u64_stats_update_begin(&rx->statss);
+	int err;
 	switch (xdp_act) {
 	case XDP_ABORTED:
 	case XDP_DROP:
 	default:
-		rx->xdp_actions[xdp_act]++;
+		gve_free_buffer(rx, buf_state);
 		break;
 	case XDP_TX:
-		rx->xdp_tx_errors++;
+		err = gve_xdp_tx_dqo(priv, rx, xdp);
+		if (unlikely(err))
+			goto err;
+		gve_reuse_buffer(rx, buf_state);
 		break;
 	case XDP_REDIRECT:
-		rx->xdp_redirect_errors++;
+		err = xdp_do_redirect(priv->dev, xdp, xprog);
+		if (unlikely(err))
+			goto err;
+		gve_reuse_buffer(rx, buf_state);
 		break;
 	}
+	u64_stats_update_begin(&rx->statss);
+	if ((u32)xdp_act < GVE_XDP_ACTIONS)
+		rx->xdp_actions[xdp_act]++;
+	u64_stats_update_end(&rx->statss);
+	return;
+err:
+	u64_stats_update_begin(&rx->statss);
+	if (xdp_act == XDP_TX)
+		rx->xdp_tx_errors++;
+	else if (xdp_act == XDP_REDIRECT)
+		rx->xdp_redirect_errors++;
 	u64_stats_update_end(&rx->statss);
 	gve_free_buffer(rx, buf_state);
+	return;
 }
 
 /* Returns 0 if descriptor is completed successfully.
@@ -812,16 +852,27 @@ static int gve_rx_complete_skb(struct gve_rx_ring *rx, struct napi_struct *napi,
 
 int gve_rx_poll_dqo(struct gve_notify_block *block, int budget)
 {
-	struct napi_struct *napi = &block->napi;
-	netdev_features_t feat = napi->dev->features;
-
-	struct gve_rx_ring *rx = block->rx;
-	struct gve_rx_compl_queue_dqo *complq = &rx->dqo.complq;
-
+	struct gve_rx_compl_queue_dqo *complq;
+	struct napi_struct *napi;
+	netdev_features_t feat;
+	struct gve_rx_ring *rx;
+	struct gve_priv *priv;
+	u64 xdp_redirects;
 	u32 work_done = 0;
 	u64 bytes = 0;
+	u64 xdp_txs;
 	int err;
 
+	napi = &block->napi;
+	feat = napi->dev->features;
+
+	rx = block->rx;
+	priv = rx->gve;
+	complq = &rx->dqo.complq;
+
+	xdp_redirects = rx->xdp_actions[XDP_REDIRECT];
+	xdp_txs = rx->xdp_actions[XDP_TX];
+
 	while (work_done < budget) {
 		struct gve_rx_compl_desc_dqo *compl_desc =
 			&complq->desc_ring[complq->head];
@@ -895,6 +946,12 @@ int gve_rx_poll_dqo(struct gve_notify_block *block, int budget)
 		rx->ctx.skb_tail = NULL;
 	}
 
+	if (xdp_txs != rx->xdp_actions[XDP_TX])
+		gve_xdp_tx_flush_dqo(priv, rx->q_num);
+
+	if (xdp_redirects != rx->xdp_actions[XDP_REDIRECT])
+		xdp_do_flush();
+
 	gve_rx_post_buffers_dqo(rx);
 
 	u64_stats_update_begin(&rx->statss);
diff --git a/drivers/net/ethernet/google/gve/gve_tx_dqo.c b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
index ba6b5cdaa922..ce5370b741ec 100644
--- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
@@ -9,6 +9,7 @@
 #include "gve_utils.h"
 #include "gve_dqo.h"
 #include
+#include
 #include
 #include
 #include
@@ -110,6 +111,14 @@ static bool gve_has_pending_packet(struct gve_tx_ring *tx)
 	return false;
 }
 
+void gve_xdp_tx_flush_dqo(struct gve_priv *priv, u32 xdp_qid)
+{
+	u32 tx_qid = gve_xdp_tx_queue_id(priv, xdp_qid);
+	struct gve_tx_ring *tx = &priv->tx[tx_qid];
+
+	gve_tx_put_doorbell_dqo(priv, tx->q_resources, tx->dqo_tx.tail);
+}
+
 static struct gve_tx_pending_packet_dqo *
 gve_alloc_pending_packet(struct gve_tx_ring *tx)
 {
@@ -198,7 +207,8 @@ void gve_tx_stop_ring_dqo(struct gve_priv *priv, int idx)
 
 	gve_remove_napi(priv, ntfy_idx);
 	gve_clean_tx_done_dqo(priv, tx, /*napi=*/NULL);
-	netdev_tx_reset_queue(tx->netdev_txq);
+	if (tx->netdev_txq)
+		netdev_tx_reset_queue(tx->netdev_txq);
 	gve_tx_clean_pending_packets(tx);
 	gve_tx_remove_from_block(priv, idx);
 }
@@ -276,7 +286,8 @@ void gve_tx_start_ring_dqo(struct gve_priv *priv, int idx)
 
 	gve_tx_add_to_block(priv, idx);
 
-	tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx);
+	if (idx < priv->tx_cfg.num_queues)
+		tx->netdev_txq = netdev_get_tx_queue(priv->dev, idx);
 	gve_add_napi(priv, ntfy_idx, gve_napi_poll_dqo);
 }
 
@@ -295,6 +306,7 @@ static int gve_tx_alloc_ring_dqo(struct gve_priv *priv,
 	memset(tx, 0, sizeof(*tx));
 	tx->q_num = idx;
 	tx->dev = hdev;
+	spin_lock_init(&tx->dqo_tx.xdp_lock);
 	atomic_set_release(&tx->dqo_compl.hw_tx_head, 0);
 
 	/* Queue sizes must be a power of 2 */
@@ -795,6 +807,7 @@ static int gve_tx_add_skb_dqo(struct gve_tx_ring *tx,
 		return -ENOMEM;
 
 	pkt->skb = skb;
+	pkt->type = GVE_TX_PENDING_PACKET_DQO_SKB;
 	completion_tag = pkt - tx->dqo.pending_packets;
 
 	gve_extract_tx_metadata_dqo(skb, &metadata);
@@ -1116,16 +1129,32 @@ static void gve_handle_packet_completion(struct gve_priv *priv,
 		}
 	}
 	tx->dqo_tx.completed_packet_desc_cnt += pending_packet->num_bufs;
-	if (tx->dqo.qpl)
-		gve_free_tx_qpl_bufs(tx, pending_packet);
-	else
+
+	switch (pending_packet->type) {
+	case GVE_TX_PENDING_PACKET_DQO_SKB:
+		if (tx->dqo.qpl)
+			gve_free_tx_qpl_bufs(tx, pending_packet);
+		else
+			gve_unmap_packet(tx->dev, pending_packet);
+		(*pkts)++;
+		*bytes += pending_packet->skb->len;
+
+		napi_consume_skb(pending_packet->skb, is_napi);
+		pending_packet->skb = NULL;
+		gve_free_pending_packet(tx, pending_packet);
+		break;
+	case GVE_TX_PENDING_PACKET_DQO_XDP_FRAME:
 		gve_unmap_packet(tx->dev, pending_packet);
+		(*pkts)++;
+		*bytes += pending_packet->xdpf->len;
 
-	*bytes += pending_packet->skb->len;
-	(*pkts)++;
-	napi_consume_skb(pending_packet->skb, is_napi);
-	pending_packet->skb = NULL;
-	gve_free_pending_packet(tx, pending_packet);
+		xdp_return_frame(pending_packet->xdpf);
+		pending_packet->xdpf = NULL;
+		gve_free_pending_packet(tx, pending_packet);
+		break;
+	default:
+		WARN_ON_ONCE(1);
+	}
 }
 
 static void gve_handle_miss_completion(struct gve_priv *priv,
@@ -1296,9 +1325,10 @@ int gve_clean_tx_done_dqo(struct gve_priv *priv, struct gve_tx_ring *tx,
 		num_descs_cleaned++;
 	}
 
-	netdev_tx_completed_queue(tx->netdev_txq,
-				  pkt_compl_pkts + miss_compl_pkts,
-				  pkt_compl_bytes + miss_compl_bytes);
+	if (tx->netdev_txq)
+		netdev_tx_completed_queue(tx->netdev_txq,
+					  pkt_compl_pkts + miss_compl_pkts,
+					  pkt_compl_bytes + miss_compl_bytes);
 
 	remove_miss_completions(priv, tx);
 	remove_timed_out_completions(priv, tx);
@@ -1334,3 +1364,98 @@ bool gve_tx_poll_dqo(struct gve_notify_block *block, bool do_clean)
 	compl_desc = &tx->dqo.compl_ring[tx->dqo_compl.head];
 	return compl_desc->generation != tx->dqo_compl.cur_gen_bit;
 }
+
+bool gve_xdp_poll_dqo(struct gve_notify_block *block)
+{
+	struct gve_tx_compl_desc *compl_desc;
+	struct gve_tx_ring *tx = block->tx;
+	struct gve_priv *priv = block->priv;
+
+	gve_clean_tx_done_dqo(priv, tx, &block->napi);
+
+	/* Return true if we still have work.
+	 */
+	compl_desc = &tx->dqo.compl_ring[tx->dqo_compl.head];
+	return compl_desc->generation != tx->dqo_compl.cur_gen_bit;
+}
+
+int gve_xdp_xmit_one_dqo(struct gve_priv *priv, struct gve_tx_ring *tx,
+			 struct xdp_frame *xdpf)
+{
+	struct gve_tx_pending_packet_dqo *pkt;
+	u32 desc_idx = tx->dqo_tx.tail;
+	s16 completion_tag;
+	int num_descs = 1;
+	dma_addr_t addr;
+	int err;
+
+	if (unlikely(!gve_has_tx_slots_available(tx, num_descs)))
+		return -EBUSY;
+
+	pkt = gve_alloc_pending_packet(tx);
+	if (unlikely(!pkt))
+		return -EBUSY;
+
+	pkt->type = GVE_TX_PENDING_PACKET_DQO_XDP_FRAME;
+	pkt->num_bufs = 0;
+	pkt->xdpf = xdpf;
+	completion_tag = pkt - tx->dqo.pending_packets;
+
+	/* Generate Packet Descriptor */
+	addr = dma_map_single(tx->dev, xdpf->data, xdpf->len, DMA_TO_DEVICE);
+	err = dma_mapping_error(tx->dev, addr);
+	if (unlikely(err))
+		goto err;
+
+	dma_unmap_len_set(pkt, len[pkt->num_bufs], xdpf->len);
+	dma_unmap_addr_set(pkt, dma[pkt->num_bufs], addr);
+	pkt->num_bufs++;
+
+	gve_tx_fill_pkt_desc_dqo(tx, &desc_idx,
+				 false, xdpf->len,
+				 addr, completion_tag, true,
+				 false);
+
+	gve_tx_update_tail(tx, desc_idx);
+	return 0;
+
+err:
+	pkt->xdpf = NULL;
+	pkt->num_bufs = 0;
+	gve_free_pending_packet(tx, pkt);
+	return err;
+}
+
+int gve_xdp_xmit_dqo(struct net_device *dev, int n, struct xdp_frame **frames,
+		     u32 flags)
+{
+	struct gve_priv *priv = netdev_priv(dev);
+	struct gve_tx_ring *tx;
+	int i, err = 0, qid;
+
+	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+		return -EINVAL;
+
+	qid = gve_xdp_tx_queue_id(priv,
+				  smp_processor_id() % priv->tx_cfg.num_xdp_queues);
+
+	tx = &priv->tx[qid];
+
+	spin_lock(&tx->dqo_tx.xdp_lock);
+	for (i = 0; i < n; i++) {
+		err = gve_xdp_xmit_one_dqo(priv, tx, frames[i]);
+		if (err)
+			break;
+	}
+
+	if (flags & XDP_XMIT_FLUSH)
+		gve_tx_put_doorbell_dqo(priv, tx->q_resources, tx->dqo_tx.tail);
+
+	spin_unlock(&tx->dqo_tx.xdp_lock);
+
+	u64_stats_update_begin(&tx->statss);
+	tx->xdp_xmit += n;
+	tx->xdp_xmit_errors += n - i;
+	u64_stats_update_end(&tx->statss);
+
+	return i ? i : err;
+}
-- 
2.50.0.rc2.761.g2dc52ea45b-goog