From nobody Sun Feb 8 18:28:29 2026
From: Julien Panis <jpanis@baylibre.com>
Date: Thu, 28 Mar 2024 10:26:40 +0100
Subject: [PATCH net-next v5 1/3] net: ethernet: ti: Add accessors for struct k3_cppi_desc_pool members
Message-Id: <20240223-am65-cpsw-xdp-basic-v5-1-bc1739170bc6@baylibre.com>
References: <20240223-am65-cpsw-xdp-basic-v5-0-bc1739170bc6@baylibre.com>
In-Reply-To: <20240223-am65-cpsw-xdp-basic-v5-0-bc1739170bc6@baylibre.com>
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Russell King, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Sumit Semwal, Christian König, Simon Horman, Andrew Lunn, Ratheesh Kannoth
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, Julien Panis

Add accessors for the desc_size and cpumem members of struct k3_cppi_desc_pool. They may be used, for instance, to compute a descriptor index.
Signed-off-by: Julien Panis <jpanis@baylibre.com>
---
 drivers/net/ethernet/ti/k3-cppi-desc-pool.c | 12 ++++++++++++
 drivers/net/ethernet/ti/k3-cppi-desc-pool.h |  2 ++
 2 files changed, 14 insertions(+)

diff --git a/drivers/net/ethernet/ti/k3-cppi-desc-pool.c b/drivers/net/ethernet/ti/k3-cppi-desc-pool.c
index 05cc7aab1ec8..fe8203c05731 100644
--- a/drivers/net/ethernet/ti/k3-cppi-desc-pool.c
+++ b/drivers/net/ethernet/ti/k3-cppi-desc-pool.c
@@ -132,5 +132,17 @@ size_t k3_cppi_desc_pool_avail(struct k3_cppi_desc_pool *pool)
 }
 EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_avail);
 
+size_t k3_cppi_desc_pool_desc_size(struct k3_cppi_desc_pool *pool)
+{
+	return pool->desc_size;
+}
+EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_desc_size);
+
+void *k3_cppi_desc_pool_cpuaddr(struct k3_cppi_desc_pool *pool)
+{
+	return pool->cpumem;
+}
+EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_cpuaddr);
+
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("TI K3 CPPI5 descriptors pool API");
diff --git a/drivers/net/ethernet/ti/k3-cppi-desc-pool.h b/drivers/net/ethernet/ti/k3-cppi-desc-pool.h
index a7e3fa5e7b62..149d5579a5e2 100644
--- a/drivers/net/ethernet/ti/k3-cppi-desc-pool.h
+++ b/drivers/net/ethernet/ti/k3-cppi-desc-pool.h
@@ -26,5 +26,7 @@ k3_cppi_desc_pool_dma2virt(struct k3_cppi_desc_pool *pool, dma_addr_t dma);
 void *k3_cppi_desc_pool_alloc(struct k3_cppi_desc_pool *pool);
 void k3_cppi_desc_pool_free(struct k3_cppi_desc_pool *pool, void *addr);
 size_t k3_cppi_desc_pool_avail(struct k3_cppi_desc_pool *pool);
+size_t k3_cppi_desc_pool_desc_size(struct k3_cppi_desc_pool *pool);
+void *k3_cppi_desc_pool_cpuaddr(struct k3_cppi_desc_pool *pool);
 
 #endif /* K3_CPPI_DESC_POOL_H_ */
-- 
2.37.3

From nobody Sun Feb 8 18:28:29 2026
From: Julien Panis <jpanis@baylibre.com>
Date: Thu, 28 Mar 2024 10:26:41 +0100
Subject: [PATCH net-next v5 2/3] net: ethernet: ti: Add desc_infos member to struct k3_cppi_desc_pool
Message-Id: <20240223-am65-cpsw-xdp-basic-v5-2-bc1739170bc6@baylibre.com>
References: <20240223-am65-cpsw-xdp-basic-v5-0-bc1739170bc6@baylibre.com>
In-Reply-To: <20240223-am65-cpsw-xdp-basic-v5-0-bc1739170bc6@baylibre.com>
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Russell King, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Sumit Semwal, Christian König, Simon Horman, Andrew Lunn, Ratheesh Kannoth
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, Julien Panis

Introduce a desc_infos member, and the related accessors, which can be used to store descriptor-specific additional information. This member can store, for instance, an ID that differentiates an skb TX buffer type from an xdpf TX buffer type.
Signed-off-by: Julien Panis <jpanis@baylibre.com>
---
 drivers/net/ethernet/ti/k3-cppi-desc-pool.c | 25 +++++++++++++++++++++++++
 drivers/net/ethernet/ti/k3-cppi-desc-pool.h |  2 ++
 2 files changed, 27 insertions(+)

diff --git a/drivers/net/ethernet/ti/k3-cppi-desc-pool.c b/drivers/net/ethernet/ti/k3-cppi-desc-pool.c
index fe8203c05731..bb42bdf7c13d 100644
--- a/drivers/net/ethernet/ti/k3-cppi-desc-pool.c
+++ b/drivers/net/ethernet/ti/k3-cppi-desc-pool.c
@@ -22,6 +22,7 @@ struct k3_cppi_desc_pool {
 	size_t mem_size;
 	size_t num_desc;
 	struct gen_pool *gen_pool;
+	void **desc_infos;
 };
 
 void k3_cppi_desc_pool_destroy(struct k3_cppi_desc_pool *pool)
@@ -37,6 +38,8 @@ void k3_cppi_desc_pool_destroy(struct k3_cppi_desc_pool *pool)
 	dma_free_coherent(pool->dev, pool->mem_size, pool->cpumem,
 			  pool->dma_addr);
 
+	kfree(pool->desc_infos);
+
 	gen_pool_destroy(pool->gen_pool);	/* frees pool->name */
 }
 EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_destroy);
@@ -72,6 +75,14 @@ k3_cppi_desc_pool_create_name(struct device *dev, size_t size,
 		goto gen_pool_create_fail;
 	}
 
+	pool->desc_infos = kcalloc(pool->num_desc, sizeof(*pool->desc_infos), GFP_KERNEL);
+	if (!pool->desc_infos) {
+		ret = -ENOMEM;
+		dev_err(pool->dev, "pool descriptor infos alloc failed %d\n", ret);
+		kfree_const(pool_name);
+		goto gen_pool_desc_infos_alloc_fail;
+	}
+
 	pool->gen_pool->name = pool_name;
 
 	pool->cpumem = dma_alloc_coherent(pool->dev, pool->mem_size,
@@ -94,6 +105,8 @@ k3_cppi_desc_pool_create_name(struct device *dev, size_t size,
 	dma_free_coherent(pool->dev, pool->mem_size, pool->cpumem,
 			  pool->dma_addr);
 dma_alloc_fail:
+	kfree(pool->desc_infos);
+gen_pool_desc_infos_alloc_fail:
 	gen_pool_destroy(pool->gen_pool);	/* frees pool->name */
 gen_pool_create_fail:
 	devm_kfree(pool->dev, pool);
@@ -144,5 +157,17 @@ void *k3_cppi_desc_pool_cpuaddr(struct k3_cppi_desc_pool *pool)
 }
 EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_cpuaddr);
 
+void k3_cppi_desc_pool_desc_info_set(struct k3_cppi_desc_pool *pool, int desc_idx, void *info)
+{
+	pool->desc_infos[desc_idx] = info;
+}
+EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_desc_info_set);
+
+void *k3_cppi_desc_pool_desc_info(struct k3_cppi_desc_pool *pool, int desc_idx)
+{
+	return pool->desc_infos[desc_idx];
+}
+EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_desc_info);
+
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("TI K3 CPPI5 descriptors pool API");
diff --git a/drivers/net/ethernet/ti/k3-cppi-desc-pool.h b/drivers/net/ethernet/ti/k3-cppi-desc-pool.h
index 149d5579a5e2..0076596307e7 100644
--- a/drivers/net/ethernet/ti/k3-cppi-desc-pool.h
+++ b/drivers/net/ethernet/ti/k3-cppi-desc-pool.h
@@ -28,5 +28,7 @@ void k3_cppi_desc_pool_free(struct k3_cppi_desc_pool *pool, void *addr);
 size_t k3_cppi_desc_pool_avail(struct k3_cppi_desc_pool *pool);
 size_t k3_cppi_desc_pool_desc_size(struct k3_cppi_desc_pool *pool);
 void *k3_cppi_desc_pool_cpuaddr(struct k3_cppi_desc_pool *pool);
+void k3_cppi_desc_pool_desc_info_set(struct k3_cppi_desc_pool *pool, int desc_idx, void *info);
+void *k3_cppi_desc_pool_desc_info(struct k3_cppi_desc_pool *pool, int desc_idx);
 
 #endif /* K3_CPPI_DESC_POOL_H_ */
-- 
2.37.3

From nobody Sun Feb 8 18:28:29 2026
From: Julien Panis <jpanis@baylibre.com>
Date: Thu, 28 Mar 2024 10:26:42 +0100
Subject: [PATCH net-next v5 3/3] net: ethernet: ti: am65-cpsw: Add minimal XDP support
Message-Id: <20240223-am65-cpsw-xdp-basic-v5-3-bc1739170bc6@baylibre.com>
References: <20240223-am65-cpsw-xdp-basic-v5-0-bc1739170bc6@baylibre.com>
In-Reply-To: <20240223-am65-cpsw-xdp-basic-v5-0-bc1739170bc6@baylibre.com>
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Russell King, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Sumit Semwal, Christian König, Simon Horman, Andrew Lunn, Ratheesh Kannoth
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, bpf@vger.kernel.org, linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org, Julien Panis

Add XDP (eXpress Data Path) support to the TI AM65 CPSW Ethernet driver. The following features are implemented:
- NETDEV_XDP_ACT_BASIC (XDP_PASS, XDP_TX, XDP_DROP, XDP_ABORTED)
- NETDEV_XDP_ACT_REDIRECT (XDP_REDIRECT)
- NETDEV_XDP_ACT_NDO_XMIT (ndo_xdp_xmit callback)

The page pool memory model is used to get better performance. Below are benchmark results obtained for the receiver with default iperf3 parameters:
- Without page pool: 495 Mbits/sec
- With page pool: 505 Mbits/sec (actually 510 Mbits/sec, with a 5 Mbits/sec loss due to extra processing in the hot path to handle XDP)
Signed-off-by: Julien Panis <jpanis@baylibre.com>
---
 drivers/net/ethernet/ti/am65-cpsw-nuss.c | 536 ++++++++++++++++++++++++++++---
 drivers/net/ethernet/ti/am65-cpsw-nuss.h |  13 +
 2 files changed, 499 insertions(+), 50 deletions(-)

diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 9d2f4ac783e4..67239c35d346 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -5,6 +5,7 @@
  *
  */
 
+#include
 #include
 #include
 #include
@@ -30,6 +31,7 @@
 #include
 #include
 #include
+#include
 #include
 
 #include "cpsw_ale.h"
@@ -138,6 +140,17 @@
 
 #define AM65_CPSW_DEFAULT_TX_CHNS	8
 
+/* CPPI streaming packet interface */
+#define AM65_CPSW_CPPI_TX_FLOW_ID	0x3FFF
+#define AM65_CPSW_CPPI_TX_PKT_TYPE	0x7
+
+/* XDP */
+#define AM65_CPSW_XDP_CONSUMED	1
+#define AM65_CPSW_XDP_PASS	0
+
+/* Include headroom compatible with both skb and xdpf */
+#define AM65_CPSW_HEADROOM	max(NET_SKB_PAD, XDP_PACKET_HEADROOM)
+
 static void am65_cpsw_port_set_sl_mac(struct am65_cpsw_port *slave,
				      const u8 *dev_addr)
 {
@@ -369,6 +382,66 @@ static void am65_cpsw_init_host_port_emac(struct am65_cpsw_common *common);
 static void am65_cpsw_init_port_switch_ale(struct am65_cpsw_port *port);
 static void am65_cpsw_init_port_emac_ale(struct am65_cpsw_port *port);
 
+static void am65_cpsw_destroy_xdp_rxqs(struct am65_cpsw_common *common)
+{
+	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
+	struct xdp_rxq_info *rxq;
+	int i;
+
+	for (i = 0; i < common->port_num; i++) {
+		rxq = &common->ports[i].xdp_rxq;
+
+		if (xdp_rxq_info_is_reg(rxq))
+			xdp_rxq_info_unreg(rxq);
+	}
+
+	if (rx_chn->page_pool) {
+		page_pool_destroy(rx_chn->page_pool);
+		rx_chn->page_pool = NULL;
+	}
+}
+
+static int am65_cpsw_create_xdp_rxqs(struct am65_cpsw_common *common)
+{
+	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
+	struct page_pool_params pp_params = {
+		.flags = PP_FLAG_DMA_MAP,
+		.order = 0,
+		.pool_size = AM65_CPSW_MAX_RX_DESC,
+		.nid = dev_to_node(common->dev),
+		.dev = common->dev,
+		.dma_dir = DMA_BIDIRECTIONAL,
+		.napi = &common->napi_rx,
+	};
+	struct xdp_rxq_info *rxq;
+	struct page_pool *pool;
+	int i, ret;
+
+	pool = page_pool_create(&pp_params);
+	if (IS_ERR(pool))
+		return PTR_ERR(pool);
+
+	rx_chn->page_pool = pool;
+
+	for (i = 0; i < common->port_num; i++) {
+		rxq = &common->ports[i].xdp_rxq;
+
+		ret = xdp_rxq_info_reg(rxq, common->ports[i].ndev, i, 0);
+		if (ret)
+			goto err;
+
+		ret = xdp_rxq_info_reg_mem_model(rxq, MEM_TYPE_PAGE_POOL, pool);
+		if (ret)
+			goto err;
+	}
+
+	return 0;
+
+err:
+	am65_cpsw_destroy_xdp_rxqs(common);
+	return ret;
+}
+
 static void am65_cpsw_nuss_rx_cleanup(void *data, dma_addr_t desc_dma)
 {
 	struct am65_cpsw_rx_chn *rx_chn = data;
@@ -440,9 +513,40 @@ static void am65_cpsw_nuss_tx_cleanup(void *data, dma_addr_t desc_dma)
 	dev_kfree_skb_any(skb);
 }
 
+static struct sk_buff *am65_cpsw_alloc_skb(struct am65_cpsw_rx_chn *rx_chn,
+					   struct net_device *ndev,
+					   unsigned int len,
+					   int desc_idx)
+{
+	struct sk_buff *skb;
+	struct page *page;
+
+	page = page_pool_dev_alloc_pages(rx_chn->page_pool);
+	if (unlikely(!page))
+		return NULL;
+
+	len += AM65_CPSW_HEADROOM;
+
+	skb = build_skb(page_address(page), len);
+	if (unlikely(!skb)) {
+		page_pool_put_full_page(rx_chn->page_pool, page, ndev);
+		rx_chn->pages[desc_idx] = NULL;
+		return NULL;
+	}
+
+	skb_reserve(skb, AM65_CPSW_HEADROOM + NET_IP_ALIGN);
+	skb->dev = ndev;
+
+	rx_chn->pages[desc_idx] = page;
+
+	return skb;
+}
+
 static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common)
 {
 	struct am65_cpsw_host *host_p = am65_common_get_host(common);
+	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
+	struct am65_cpsw_tx_chn *tx_chn = common->tx_chns;
 	int port_idx, i, ret, tx;
 	struct sk_buff *skb;
 	u32 val, port_mask;
@@ -505,10 +609,14 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common)
 
 	am65_cpsw_qos_tx_p0_rate_init(common);
 
-	for (i = 0; i < common->rx_chns.descs_num; i++) {
-		skb = __netdev_alloc_skb_ip_align(NULL,
-						  AM65_CPSW_MAX_PACKET_SIZE,
-						  GFP_KERNEL);
+	ret = am65_cpsw_create_xdp_rxqs(common);
+	if (ret) {
+		dev_err(common->dev, "Failed to create XDP rx queues\n");
+		return ret;
+	}
+
+	for (i = 0; i < rx_chn->descs_num; i++) {
+		skb = am65_cpsw_alloc_skb(rx_chn, NULL, AM65_CPSW_MAX_PACKET_SIZE, i);
 		if (!skb) {
 			ret = -ENOMEM;
 			dev_err(common->dev, "cannot allocate skb\n");
@@ -531,27 +639,27 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common)
 		}
 	}
 
-	ret = k3_udma_glue_enable_rx_chn(common->rx_chns.rx_chn);
+	ret = k3_udma_glue_enable_rx_chn(rx_chn->rx_chn);
 	if (ret) {
 		dev_err(common->dev, "couldn't enable rx chn: %d\n", ret);
 		goto fail_rx;
 	}
 
 	for (tx = 0; tx < common->tx_ch_num; tx++) {
-		ret = k3_udma_glue_enable_tx_chn(common->tx_chns[tx].tx_chn);
+		ret = k3_udma_glue_enable_tx_chn(tx_chn[tx].tx_chn);
 		if (ret) {
 			dev_err(common->dev, "couldn't enable tx chn %d: %d\n",
 				tx, ret);
 			tx--;
 			goto fail_tx;
 		}
-		napi_enable(&common->tx_chns[tx].napi_tx);
+		napi_enable(&tx_chn[tx].napi_tx);
 	}
 
 	napi_enable(&common->napi_rx);
 	if (common->rx_irq_disabled) {
 		common->rx_irq_disabled = false;
-		enable_irq(common->rx_chns.irq);
+		enable_irq(rx_chn->irq);
 	}
 
 	dev_dbg(common->dev, "cpsw_nuss started\n");
@@ -559,22 +667,23 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common)
 
 fail_tx:
 	while (tx >= 0) {
-		napi_disable(&common->tx_chns[tx].napi_tx);
-		k3_udma_glue_disable_tx_chn(common->tx_chns[tx].tx_chn);
+		napi_disable(&tx_chn[tx].napi_tx);
+		k3_udma_glue_disable_tx_chn(tx_chn[tx].tx_chn);
 		tx--;
 	}
 
-	k3_udma_glue_disable_rx_chn(common->rx_chns.rx_chn);
+	k3_udma_glue_disable_rx_chn(rx_chn->rx_chn);
 
fail_rx:
-	k3_udma_glue_reset_rx_chn(common->rx_chns.rx_chn, 0,
-				  &common->rx_chns,
+	k3_udma_glue_reset_rx_chn(rx_chn->rx_chn, 0, rx_chn,
 				  am65_cpsw_nuss_rx_cleanup, 0);
 	return ret;
 }
 
 static int am65_cpsw_nuss_common_stop(struct am65_cpsw_common *common)
 {
+	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
+	struct am65_cpsw_tx_chn *tx_chn = common->tx_chns;
 	int i;
 
 	if (common->usage_count != 1)
@@ -590,26 +699,25 @@ static int am65_cpsw_nuss_common_stop(struct am65_cpsw_common *common)
 	reinit_completion(&common->tdown_complete);
 
 	for (i = 0; i < common->tx_ch_num; i++)
-		k3_udma_glue_tdown_tx_chn(common->tx_chns[i].tx_chn, false);
+		k3_udma_glue_tdown_tx_chn(tx_chn[i].tx_chn, false);
 
 	i = wait_for_completion_timeout(&common->tdown_complete,
					msecs_to_jiffies(1000));
 	if (!i)
 		dev_err(common->dev, "tx timeout\n");
 	for (i = 0; i < common->tx_ch_num; i++) {
-		napi_disable(&common->tx_chns[i].napi_tx);
-		hrtimer_cancel(&common->tx_chns[i].tx_hrtimer);
+		napi_disable(&tx_chn[i].napi_tx);
+		hrtimer_cancel(&tx_chn[i].tx_hrtimer);
 	}
 
 	for (i = 0; i < common->tx_ch_num; i++) {
-		k3_udma_glue_reset_tx_chn(common->tx_chns[i].tx_chn,
-					  &common->tx_chns[i],
+		k3_udma_glue_reset_tx_chn(tx_chn[i].tx_chn, &tx_chn[i],
					  am65_cpsw_nuss_tx_cleanup);
-		k3_udma_glue_disable_tx_chn(common->tx_chns[i].tx_chn);
+		k3_udma_glue_disable_tx_chn(tx_chn[i].tx_chn);
 	}
 
 	reinit_completion(&common->tdown_complete);
-	k3_udma_glue_tdown_rx_chn(common->rx_chns.rx_chn, true);
+	k3_udma_glue_tdown_rx_chn(rx_chn->rx_chn, true);
 
 	if (common->pdata.quirks & AM64_CPSW_QUIRK_DMA_RX_TDOWN_IRQ) {
 		i = wait_for_completion_timeout(&common->tdown_complete, msecs_to_jiffies(1000));
@@ -621,17 +729,24 @@ static int am65_cpsw_nuss_common_stop(struct am65_cpsw_common *common)
 	hrtimer_cancel(&common->rx_hrtimer);
 
 	for (i = 0; i < AM65_CPSW_MAX_RX_FLOWS; i++)
-		k3_udma_glue_reset_rx_chn(common->rx_chns.rx_chn, i,
-					  &common->rx_chns,
+		k3_udma_glue_reset_rx_chn(rx_chn->rx_chn, i, rx_chn,
					  am65_cpsw_nuss_rx_cleanup, !!i);
 
-	k3_udma_glue_disable_rx_chn(common->rx_chns.rx_chn);
+	k3_udma_glue_disable_rx_chn(rx_chn->rx_chn);
 
 	cpsw_ale_stop(common->ale);
 
 	writel(0, common->cpsw_base + AM65_CPSW_REG_CTL);
 	writel(0, common->cpsw_base + AM65_CPSW_REG_STAT_PORT_EN);
 
+	for (i = 0; i < rx_chn->descs_num; i++) {
+		if (rx_chn->pages[i]) {
+			page_pool_put_full_page(rx_chn->page_pool, rx_chn->pages[i], false);
+			rx_chn->pages[i] = NULL;
+		}
+	}
+
+	am65_cpsw_destroy_xdp_rxqs(common);
+
 	dev_dbg(common->dev, "cpsw_nuss stopped\n");
 	return 0;
 }
@@ -749,6 +864,176 @@ static int am65_cpsw_nuss_ndo_slave_open(struct net_device *ndev)
 	return ret;
 }
 
+static int am65_cpsw_nuss_desc_idx(struct k3_cppi_desc_pool *desc_pool, void *desc,
+				   unsigned char dsize_log2)
+{
+	void *pool_addr = k3_cppi_desc_pool_cpuaddr(desc_pool);
+
+	return (desc - pool_addr) >> dsize_log2;
+}
+
+static void am65_cpsw_nuss_set_buf_type(struct am65_cpsw_tx_chn *tx_chn,
+					struct cppi5_host_desc_t *desc,
+					enum am65_cpsw_tx_buf_type buf_type)
+{
+	int desc_idx;
+
+	desc_idx = am65_cpsw_nuss_desc_idx(tx_chn->desc_pool, desc, tx_chn->dsize_log2);
+	k3_cppi_desc_pool_desc_info_set(tx_chn->desc_pool, desc_idx, (void *)buf_type);
+}
+
+static enum am65_cpsw_tx_buf_type am65_cpsw_nuss_buf_type(struct am65_cpsw_tx_chn *tx_chn,
+							  dma_addr_t desc_dma)
+{
+	struct cppi5_host_desc_t *desc_tx;
+	int desc_idx;
+
+	desc_tx = k3_cppi_desc_pool_dma2virt(tx_chn->desc_pool, desc_dma);
+	desc_idx = am65_cpsw_nuss_desc_idx(tx_chn->desc_pool, desc_tx, tx_chn->dsize_log2);
+
+	return (enum am65_cpsw_tx_buf_type)k3_cppi_desc_pool_desc_info(tx_chn->desc_pool,
+								       desc_idx);
+}
+
+static int am65_cpsw_xdp_tx_frame(struct net_device *ndev,
+				  struct am65_cpsw_tx_chn *tx_chn,
+				  struct xdp_frame *xdpf,
+				  enum am65_cpsw_tx_buf_type buf_type)
+{
+	struct am65_cpsw_common *common = am65_ndev_to_common(ndev);
+	struct am65_cpsw_port *port = am65_ndev_to_port(ndev);
+	struct cppi5_host_desc_t *host_desc;
+	struct netdev_queue *netif_txq;
+	dma_addr_t dma_desc, dma_buf;
+	u32 pkt_len = xdpf->len;
+	void **swdata;
+	int ret;
+
+	host_desc = k3_cppi_desc_pool_alloc(tx_chn->desc_pool);
+	if (unlikely(!host_desc)) {
+		ndev->stats.tx_dropped++;
+		return -ENOMEM;
+	}
+
+	am65_cpsw_nuss_set_buf_type(tx_chn, host_desc, buf_type);
+
+	dma_buf = dma_map_single(tx_chn->dma_dev, xdpf->data, pkt_len, DMA_TO_DEVICE);
+	if (unlikely(dma_mapping_error(tx_chn->dma_dev, dma_buf))) {
+		ndev->stats.tx_dropped++;
+		ret = -ENOMEM;
+		goto pool_free;
+	}
+
+	cppi5_hdesc_init(host_desc, CPPI5_INFO0_HDESC_EPIB_PRESENT, AM65_CPSW_NAV_PS_DATA_SIZE);
+	cppi5_hdesc_set_pkttype(host_desc, AM65_CPSW_CPPI_TX_PKT_TYPE);
+	cppi5_hdesc_set_pktlen(host_desc, pkt_len);
+	cppi5_desc_set_pktids(&host_desc->hdr, 0, AM65_CPSW_CPPI_TX_FLOW_ID);
+	cppi5_desc_set_tags_ids(&host_desc->hdr, 0, port->port_id);
+
+	k3_udma_glue_tx_dma_to_cppi5_addr(tx_chn->tx_chn, &dma_buf);
+	cppi5_hdesc_attach_buf(host_desc, dma_buf, pkt_len, dma_buf, pkt_len);
+
+	swdata = cppi5_hdesc_get_swdata(host_desc);
+	*(swdata) = xdpf;
+
+	/* Report BQL before sending the packet */
+	netif_txq = netdev_get_tx_queue(ndev, tx_chn->id);
+	netdev_tx_sent_queue(netif_txq, pkt_len);
+
+	dma_desc = k3_cppi_desc_pool_virt2dma(tx_chn->desc_pool, host_desc);
+	if (AM65_CPSW_IS_CPSW2G(common)) {
+		ret = k3_udma_glue_push_tx_chn(tx_chn->tx_chn, host_desc, dma_desc);
+	} else {
+		spin_lock_bh(&tx_chn->lock);
+		ret = k3_udma_glue_push_tx_chn(tx_chn->tx_chn, host_desc, dma_desc);
+		spin_unlock_bh(&tx_chn->lock);
+	}
+	if (ret) {
+		/* Inform BQL */
+		netdev_tx_completed_queue(netif_txq, 1, pkt_len);
+		ndev->stats.tx_errors++;
+		goto dma_unmap;
+	}
+
+	return 0;
+
+dma_unmap:
+	k3_udma_glue_tx_cppi5_to_dma_addr(tx_chn->tx_chn, &dma_buf);
+	dma_unmap_single(tx_chn->dma_dev, dma_buf, pkt_len, DMA_TO_DEVICE);
+pool_free:
+	k3_cppi_desc_pool_free(tx_chn->desc_pool, host_desc);
+	return ret;
+}
+
+static int am65_cpsw_run_xdp(struct am65_cpsw_common *common, struct am65_cpsw_port *port,
+			     struct xdp_buff *xdp, int desc_idx, int cpu, int *len)
+{
+	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
+	struct net_device *ndev = port->ndev;
+	int ret = AM65_CPSW_XDP_CONSUMED;
+	struct am65_cpsw_tx_chn *tx_chn;
+	struct netdev_queue *netif_txq;
+	struct xdp_frame *xdpf;
+	struct bpf_prog *prog;
+	struct page *page;
+	u32 act;
+
+	prog = READ_ONCE(port->xdp_prog);
+	if (!prog)
+		return AM65_CPSW_XDP_PASS;
+
+	act = bpf_prog_run_xdp(prog, xdp);
+	/* XDP prog might have changed packet data and boundaries */
+	*len = xdp->data_end - xdp->data;
+
+	switch (act) {
+	case XDP_PASS:
+		ret = AM65_CPSW_XDP_PASS;
+		goto out;
+	case XDP_TX:
+		tx_chn = &common->tx_chns[cpu % AM65_CPSW_MAX_TX_QUEUES];
+		netif_txq = netdev_get_tx_queue(ndev, tx_chn->id);
+
+		xdpf = xdp_convert_buff_to_frame(xdp);
+		if (unlikely(!xdpf))
+			break;
+
+		__netif_tx_lock(netif_txq, cpu);
+		ret = am65_cpsw_xdp_tx_frame(ndev, tx_chn, xdpf,
+					     AM65_CPSW_TX_BUF_TYPE_XDP_TX);
+		__netif_tx_unlock(netif_txq);
+		if (ret)
+			break;
+
+		ndev->stats.rx_bytes += *len;
+		ndev->stats.rx_packets++;
+		ret = AM65_CPSW_XDP_CONSUMED;
+		goto out;
+	case XDP_REDIRECT:
+		if (unlikely(xdp_do_redirect(ndev, xdp, prog)))
+			break;
+
+		xdp_do_flush();
+		ndev->stats.rx_bytes += *len;
+		ndev->stats.rx_packets++;
+		goto out;
+	default:
+		bpf_warn_invalid_xdp_action(ndev, prog, act);
+		fallthrough;
+	case XDP_ABORTED:
+		trace_xdp_exception(ndev, prog, act);
+		fallthrough;
+	case XDP_DROP:
+		ndev->stats.rx_dropped++;
+	}
+
+	page = virt_to_head_page(xdp->data);
+	page_pool_recycle_direct(rx_chn->page_pool, page);
+	rx_chn->pages[desc_idx] = NULL;
+out:
+	return ret;
+}
+
 static void am65_cpsw_nuss_rx_ts(struct sk_buff *skb, u32 *psdata)
 {
 	struct skb_shared_hwtstamps *ssh;
@@ -795,7 +1080,7 @@ static void am65_cpsw_nuss_rx_csum(struct sk_buff *skb, u32 csum_info)
 }
 
 static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common,
-				     u32 flow_idx)
+				     u32 flow_idx, int cpu)
 {
 	struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns;
 	u32 buf_dma_len, pkt_len,
port_id =3D 0, csum_info; @@ -806,10 +1091,12 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cps= w_common *common, struct sk_buff *skb, *new_skb; dma_addr_t desc_dma, buf_dma; struct am65_cpsw_port *port; + int headroom, desc_idx, ret; struct net_device *ndev; + struct xdp_buff xdp; + struct page *page; void **swdata; u32 *psdata; - int ret =3D 0; =20 ret =3D k3_udma_glue_pop_rx_chn(rx_chn->rx_chn, flow_idx, &desc_dma); if (ret) { @@ -851,11 +1138,30 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cps= w_common *common, =20 k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx); =20 - new_skb =3D netdev_alloc_skb_ip_align(ndev, AM65_CPSW_MAX_PACKET_SIZE); + desc_idx =3D am65_cpsw_nuss_desc_idx(rx_chn->desc_pool, desc_rx, rx_chn->= dsize_log2); + + if (port->xdp_prog) { + xdp_init_buff(&xdp, AM65_CPSW_MAX_PACKET_SIZE, &port->xdp_rxq); + + page =3D virt_to_page(skb->data); + xdp_prepare_buff(&xdp, page_address(page), skb_headroom(skb), pkt_len, f= alse); + + ret =3D am65_cpsw_run_xdp(common, port, &xdp, desc_idx, cpu, &pkt_len); + if (ret !=3D AM65_CPSW_XDP_PASS) + return ret; + + /* Compute additional headroom to be reserved */ + headroom =3D (xdp.data - xdp.data_hard_start) - skb_headroom(skb); + skb_reserve(skb, headroom); + } + + /* Pass skb to netstack if no XDP prog or returned XDP_PASS */ + new_skb =3D am65_cpsw_alloc_skb(rx_chn, ndev, AM65_CPSW_MAX_PACKET_SIZE, = desc_idx); if (new_skb) { ndev_priv =3D netdev_priv(ndev); am65_cpsw_nuss_set_offload_fwd_mark(skb, ndev_priv->offload_fwd_mark); skb_put(skb, pkt_len); + skb_mark_for_recycle(skb); skb->protocol =3D eth_type_trans(skb, ndev); am65_cpsw_nuss_rx_csum(skb, csum_info); napi_gro_receive(&common->napi_rx, skb); @@ -901,6 +1207,7 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *= napi_rx, int budget) { struct am65_cpsw_common *common =3D am65_cpsw_napi_to_common(napi_rx); int flow =3D AM65_CPSW_MAX_RX_FLOWS; + int cpu =3D smp_processor_id(); int cur_budget, ret; int num_rx =3D 0; =20 @@ 
-909,7 +1216,7 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *= napi_rx, int budget) cur_budget =3D budget - num_rx; =20 while (cur_budget--) { - ret =3D am65_cpsw_nuss_rx_packets(common, flow); + ret =3D am65_cpsw_nuss_rx_packets(common, flow, cpu); if (ret) break; num_rx++; @@ -938,8 +1245,8 @@ static int am65_cpsw_nuss_rx_poll(struct napi_struct *= napi_rx, int budget) } =20 static struct sk_buff * -am65_cpsw_nuss_tx_compl_packet(struct am65_cpsw_tx_chn *tx_chn, - dma_addr_t desc_dma) +am65_cpsw_nuss_tx_compl_packet_skb(struct am65_cpsw_tx_chn *tx_chn, + dma_addr_t desc_dma) { struct am65_cpsw_ndev_priv *ndev_priv; struct am65_cpsw_ndev_stats *stats; @@ -968,6 +1275,39 @@ am65_cpsw_nuss_tx_compl_packet(struct am65_cpsw_tx_ch= n *tx_chn, return skb; } =20 +static struct xdp_frame * +am65_cpsw_nuss_tx_compl_packet_xdp(struct am65_cpsw_common *common, + struct am65_cpsw_tx_chn *tx_chn, + dma_addr_t desc_dma, + struct net_device **ndev) +{ + struct am65_cpsw_ndev_priv *ndev_priv; + struct am65_cpsw_ndev_stats *stats; + struct cppi5_host_desc_t *desc_tx; + struct am65_cpsw_port *port; + struct xdp_frame *xdpf; + u32 port_id =3D 0; + void **swdata; + + desc_tx =3D k3_cppi_desc_pool_dma2virt(tx_chn->desc_pool, desc_dma); + cppi5_desc_get_tags_ids(&desc_tx->hdr, NULL, &port_id); + swdata =3D cppi5_hdesc_get_swdata(desc_tx); + xdpf =3D *(swdata); + am65_cpsw_nuss_xmit_free(tx_chn, desc_tx); + + port =3D am65_common_get_port(common, port_id); + *ndev =3D port->ndev; + + ndev_priv =3D netdev_priv(*ndev); + stats =3D this_cpu_ptr(ndev_priv->stats); + u64_stats_update_begin(&stats->syncp); + stats->tx_packets++; + stats->tx_bytes +=3D xdpf->len; + u64_stats_update_end(&stats->syncp); + + return xdpf; +} + static void am65_cpsw_nuss_tx_wake(struct am65_cpsw_tx_chn *tx_chn, struct= net_device *ndev, struct netdev_queue *netif_txq) { @@ -988,11 +1328,13 @@ static void am65_cpsw_nuss_tx_wake(struct am65_cpsw_= tx_chn *tx_chn, struct net_d static int 
am65_cpsw_nuss_tx_compl_packets(struct am65_cpsw_common *common, int chn, unsigned int budget, bool *tdown) { + enum am65_cpsw_tx_buf_type buf_type; struct device *dev =3D common->dev; struct am65_cpsw_tx_chn *tx_chn; struct netdev_queue *netif_txq; unsigned int total_bytes =3D 0; struct net_device *ndev; + struct xdp_frame *xdpf; struct sk_buff *skb; dma_addr_t desc_dma; int res, num_tx =3D 0; @@ -1013,10 +1355,20 @@ static int am65_cpsw_nuss_tx_compl_packets(struct a= m65_cpsw_common *common, break; } =20 - skb =3D am65_cpsw_nuss_tx_compl_packet(tx_chn, desc_dma); - total_bytes =3D skb->len; - ndev =3D skb->dev; - napi_consume_skb(skb, budget); + buf_type =3D am65_cpsw_nuss_buf_type(tx_chn, desc_dma); + if (buf_type =3D=3D AM65_CPSW_TX_BUF_TYPE_SKB) { + skb =3D am65_cpsw_nuss_tx_compl_packet_skb(tx_chn, desc_dma); + ndev =3D skb->dev; + total_bytes =3D skb->len; + napi_consume_skb(skb, budget); + } else { + xdpf =3D am65_cpsw_nuss_tx_compl_packet_xdp(common, tx_chn, desc_dma, &= ndev); + total_bytes =3D xdpf->len; + if (buf_type =3D=3D AM65_CPSW_TX_BUF_TYPE_XDP_TX) + xdp_return_frame_rx_napi(xdpf); + else + xdp_return_frame(xdpf); + } num_tx++; =20 netif_txq =3D netdev_get_tx_queue(ndev, chn); @@ -1034,11 +1386,13 @@ static int am65_cpsw_nuss_tx_compl_packets(struct a= m65_cpsw_common *common, static int am65_cpsw_nuss_tx_compl_packets_2g(struct am65_cpsw_common *com= mon, int chn, unsigned int budget, bool *tdown) { + enum am65_cpsw_tx_buf_type buf_type; struct device *dev =3D common->dev; struct am65_cpsw_tx_chn *tx_chn; struct netdev_queue *netif_txq; unsigned int total_bytes =3D 0; struct net_device *ndev; + struct xdp_frame *xdpf; struct sk_buff *skb; dma_addr_t desc_dma; int res, num_tx =3D 0; @@ -1057,11 +1411,20 @@ static int am65_cpsw_nuss_tx_compl_packets_2g(struc= t am65_cpsw_common *common, break; } =20 - skb =3D am65_cpsw_nuss_tx_compl_packet(tx_chn, desc_dma); - - ndev =3D skb->dev; - total_bytes +=3D skb->len; - napi_consume_skb(skb, budget); + 
buf_type =3D am65_cpsw_nuss_buf_type(tx_chn, desc_dma); + if (buf_type =3D=3D AM65_CPSW_TX_BUF_TYPE_SKB) { + skb =3D am65_cpsw_nuss_tx_compl_packet_skb(tx_chn, desc_dma); + ndev =3D skb->dev; + total_bytes +=3D skb->len; + napi_consume_skb(skb, budget); + } else { + xdpf =3D am65_cpsw_nuss_tx_compl_packet_xdp(common, tx_chn, desc_dma, &= ndev); + total_bytes +=3D xdpf->len; + if (buf_type =3D=3D AM65_CPSW_TX_BUF_TYPE_XDP_TX) + xdp_return_frame_rx_napi(xdpf); + else + xdp_return_frame(xdpf); + } num_tx++; } =20 @@ -1183,10 +1546,12 @@ static netdev_tx_t am65_cpsw_nuss_ndo_slave_xmit(st= ruct sk_buff *skb, goto busy_stop_q; } =20 + am65_cpsw_nuss_set_buf_type(tx_chn, first_desc, AM65_CPSW_TX_BUF_TYPE_SKB= ); + cppi5_hdesc_init(first_desc, CPPI5_INFO0_HDESC_EPIB_PRESENT, AM65_CPSW_NAV_PS_DATA_SIZE); - cppi5_desc_set_pktids(&first_desc->hdr, 0, 0x3FFF); - cppi5_hdesc_set_pkttype(first_desc, 0x7); + cppi5_desc_set_pktids(&first_desc->hdr, 0, AM65_CPSW_CPPI_TX_FLOW_ID); + cppi5_hdesc_set_pkttype(first_desc, AM65_CPSW_CPPI_TX_PKT_TYPE); cppi5_desc_set_tags_ids(&first_desc->hdr, 0, port->port_id); =20 k3_udma_glue_tx_dma_to_cppi5_addr(tx_chn->tx_chn, &buf_dma); @@ -1225,6 +1590,8 @@ static netdev_tx_t am65_cpsw_nuss_ndo_slave_xmit(stru= ct sk_buff *skb, goto busy_free_descs; } =20 + am65_cpsw_nuss_set_buf_type(tx_chn, next_desc, AM65_CPSW_TX_BUF_TYPE_SKB= ); + buf_dma =3D skb_frag_dma_map(tx_chn->dma_dev, frag, 0, frag_size, DMA_TO_DEVICE); if (unlikely(dma_mapping_error(tx_chn->dma_dev, buf_dma))) { @@ -1488,6 +1855,58 @@ static void am65_cpsw_nuss_ndo_get_stats(struct net_= device *dev, stats->tx_dropped =3D dev->stats.tx_dropped; } =20 +static int am65_cpsw_xdp_prog_setup(struct net_device *ndev, struct bpf_pr= og *prog) +{ + struct am65_cpsw_port *port =3D am65_ndev_to_port(ndev); + bool running =3D netif_running(ndev); + struct bpf_prog *old_prog; + + if (running) + am65_cpsw_nuss_ndo_slave_stop(ndev); + + old_prog =3D xchg(&port->xdp_prog, prog); + if (old_prog) + 
bpf_prog_put(old_prog); + + if (running) + return am65_cpsw_nuss_ndo_slave_open(ndev); + + return 0; +} + +static int am65_cpsw_ndo_bpf(struct net_device *ndev, struct netdev_bpf *b= pf) +{ + switch (bpf->command) { + case XDP_SETUP_PROG: + return am65_cpsw_xdp_prog_setup(ndev, bpf->prog); + default: + return -EINVAL; + } +} + +static int am65_cpsw_ndo_xdp_xmit(struct net_device *ndev, int n, + struct xdp_frame **frames, u32 flags) +{ + struct am65_cpsw_tx_chn *tx_chn; + struct netdev_queue *netif_txq; + int cpu =3D smp_processor_id(); + int i, nxmit =3D 0; + + tx_chn =3D &am65_ndev_to_common(ndev)->tx_chns[cpu % AM65_CPSW_MAX_TX_QUE= UES]; + netif_txq =3D netdev_get_tx_queue(ndev, tx_chn->id); + + __netif_tx_lock(netif_txq, cpu); + for (i =3D 0; i < n; i++) { + if (am65_cpsw_xdp_tx_frame(ndev, tx_chn, frames[i], + AM65_CPSW_TX_BUF_TYPE_XDP_NDO)) + break; + nxmit++; + } + __netif_tx_unlock(netif_txq); + + return nxmit; +} + static const struct net_device_ops am65_cpsw_nuss_netdev_ops =3D { .ndo_open =3D am65_cpsw_nuss_ndo_slave_open, .ndo_stop =3D am65_cpsw_nuss_ndo_slave_stop, @@ -1502,6 +1921,8 @@ static const struct net_device_ops am65_cpsw_nuss_net= dev_ops =3D { .ndo_eth_ioctl =3D am65_cpsw_nuss_ndo_slave_ioctl, .ndo_setup_tc =3D am65_cpsw_qos_ndo_setup_tc, .ndo_set_tx_maxrate =3D am65_cpsw_qos_ndo_tx_p0_set_maxrate, + .ndo_bpf =3D am65_cpsw_ndo_bpf, + .ndo_xdp_xmit =3D am65_cpsw_ndo_xdp_xmit, }; =20 static void am65_cpsw_disable_phy(struct phy *phy) @@ -1772,7 +2193,7 @@ static int am65_cpsw_nuss_init_tx_chns(struct am65_cp= sw_common *common) .mode =3D K3_RINGACC_RING_MODE_RING, .flags =3D 0 }; - u32 hdesc_size; + u32 hdesc_size, hdesc_size_out; int i, ret =3D 0; =20 hdesc_size =3D cppi5_hdesc_calc_size(true, AM65_CPSW_NAV_PS_DATA_SIZE, @@ -1816,6 +2237,10 @@ static int am65_cpsw_nuss_init_tx_chns(struct am65_c= psw_common *common) goto err; } =20 + hdesc_size_out =3D k3_cppi_desc_pool_desc_size(tx_chn->desc_pool); + tx_chn->dsize_log2 =3D 
__fls(hdesc_size_out); + WARN_ON(hdesc_size_out !=3D (1 << tx_chn->dsize_log2)); + tx_chn->irq =3D k3_udma_glue_tx_get_irq(tx_chn->tx_chn); if (tx_chn->irq < 0) { dev_err(dev, "Failed to get tx dma irq %d\n", @@ -1862,8 +2287,8 @@ static void am65_cpsw_nuss_free_rx_chns(void *data) static void am65_cpsw_nuss_remove_rx_chns(void *data) { struct am65_cpsw_common *common =3D data; - struct am65_cpsw_rx_chn *rx_chn; struct device *dev =3D common->dev; + struct am65_cpsw_rx_chn *rx_chn; =20 rx_chn =3D &common->rx_chns; devm_remove_action(dev, am65_cpsw_nuss_free_rx_chns, common); @@ -1873,11 +2298,7 @@ static void am65_cpsw_nuss_remove_rx_chns(void *data) =20 netif_napi_del(&common->napi_rx); =20 - if (!IS_ERR_OR_NULL(rx_chn->desc_pool)) - k3_cppi_desc_pool_destroy(rx_chn->desc_pool); - - if (!IS_ERR_OR_NULL(rx_chn->rx_chn)) - k3_udma_glue_release_rx_chn(rx_chn->rx_chn); + am65_cpsw_nuss_free_rx_chns(common); =20 common->rx_flow_id_base =3D -1; } @@ -1888,7 +2309,7 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cp= sw_common *common) struct k3_udma_glue_rx_channel_cfg rx_cfg =3D { 0 }; u32 max_desc_num =3D AM65_CPSW_MAX_RX_DESC; struct device *dev =3D common->dev; - u32 hdesc_size; + u32 hdesc_size, hdesc_size_out; u32 fdqring_id; int i, ret =3D 0; =20 @@ -1920,6 +2341,16 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_c= psw_common *common) goto err; } =20 + hdesc_size_out =3D k3_cppi_desc_pool_desc_size(rx_chn->desc_pool); + rx_chn->dsize_log2 =3D __fls(hdesc_size_out); + WARN_ON(hdesc_size_out !=3D (1 << rx_chn->dsize_log2)); + + rx_chn->page_pool =3D NULL; + + rx_chn->pages =3D devm_kcalloc(dev, rx_chn->descs_num, sizeof(*rx_chn->pa= ges), GFP_KERNEL); + if (!rx_chn->pages) + return -ENOMEM; + common->rx_flow_id_base =3D k3_udma_glue_rx_get_flow_id_base(rx_chn->rx_chn); dev_info(dev, "set new flow-id-base %u\n", common->rx_flow_id_base); @@ -2252,6 +2683,9 @@ am65_cpsw_nuss_init_port_ndev(struct am65_cpsw_common= *common, u32 port_idx) NETIF_F_HW_TC; 
port->ndev->features =3D port->ndev->hw_features | NETIF_F_HW_VLAN_CTAG_FILTER; + port->ndev->xdp_features =3D NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; port->ndev->vlan_features |=3D NETIF_F_SG; port->ndev->netdev_ops =3D &am65_cpsw_nuss_netdev_ops; port->ndev->ethtool_ops =3D &am65_cpsw_ethtool_ops_slave; @@ -2315,6 +2749,8 @@ am65_cpsw_nuss_init_port_ndev(struct am65_cpsw_common= *common, u32 port_idx) if (ret) dev_err(dev, "failed to add percpu stat free action %d\n", ret); =20 + port->xdp_prog =3D NULL; + if (!common->dma_ndev) common->dma_ndev =3D port->ndev; =20 @@ -2940,9 +3376,9 @@ static int am65_cpsw_nuss_probe(struct platform_devic= e *pdev) struct device_node *node; struct resource *res; struct clk *clk; + int ale_entries; u64 id_temp; int ret, i; - int ale_entries; =20 common =3D devm_kzalloc(dev, sizeof(struct am65_cpsw_common), GFP_KERNEL); if (!common) @@ -3154,10 +3590,10 @@ static int am65_cpsw_nuss_suspend(struct device *de= v) static int am65_cpsw_nuss_resume(struct device *dev) { struct am65_cpsw_common *common =3D dev_get_drvdata(dev); + struct am65_cpsw_host *host_p =3D am65_common_get_host(common); struct am65_cpsw_port *port; struct net_device *ndev; int i, ret; - struct am65_cpsw_host *host_p =3D am65_common_get_host(common); =20 ret =3D am65_cpsw_nuss_init_tx_chns(common); if (ret) diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.h b/drivers/net/etherne= t/ti/am65-cpsw-nuss.h index 7da0492dc091..d8ce88dc9c89 100644 --- a/drivers/net/ethernet/ti/am65-cpsw-nuss.h +++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.h @@ -14,6 +14,7 @@ #include #include #include +#include #include "am65-cpsw-qos.h" =20 struct am65_cpts; @@ -56,10 +57,18 @@ struct am65_cpsw_port { bool rx_ts_enabled; struct am65_cpsw_qos qos; struct devlink_port devlink_port; + struct bpf_prog *xdp_prog; + struct xdp_rxq_info xdp_rxq; /* Only for suspend resume context */ u32 vid_context; }; =20 +enum am65_cpsw_tx_buf_type { + 
AM65_CPSW_TX_BUF_TYPE_SKB, + AM65_CPSW_TX_BUF_TYPE_XDP_TX, + AM65_CPSW_TX_BUF_TYPE_XDP_NDO, +}; + struct am65_cpsw_host { struct am65_cpsw_common *common; void __iomem *port_base; @@ -80,6 +89,7 @@ struct am65_cpsw_tx_chn { int irq; u32 id; u32 descs_num; + unsigned char dsize_log2; char tx_chn_name[128]; u32 rate_mbps; }; @@ -89,7 +99,10 @@ struct am65_cpsw_rx_chn { struct device *dma_dev; struct k3_cppi_desc_pool *desc_pool; struct k3_udma_glue_rx_channel *rx_chn; + struct page_pool *page_pool; + struct page **pages; u32 descs_num; + unsigned char dsize_log2; int irq; }; =20 --=20 2.37.3