From nobody Fri Dec 19 08:25:21 2025
Date: Sun, 17 Dec 2023 00:09:09 -0800
In-Reply-To: <20231217080913.2025973-1-almasrymina@google.com>
References: <20231217080913.2025973-1-almasrymina@google.com>
Message-ID: <20231217080913.2025973-2-almasrymina@google.com>
Subject: [PATCH net-next v2 1/3] vsock/virtio: use skb_frag_*() helpers
From: Mina Almasry
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	kvm@vger.kernel.org, virtualization@lists.linux.dev
Cc: Mina Almasry, "David S. Miller", Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Stefan Hajnoczi, Stefano Garzarella, Jason Gunthorpe,
	Christian König, Shakeel Butt, Yunsheng Lin, Willem de Bruijn

Minor fix for virtio: code wanting to access the fields inside an skb
frag should use the skb_frag_*() helpers instead of accessing the
fields directly. This allows for extensions where the underlying
memory is not a page.
Signed-off-by: Mina Almasry

---

v2:

- Also fix skb_frag_off() + skb_frag_size() (David)
- Did not apply the reviewed-by from Stefano since the patch changed
  significantly.

---
 net/vmw_vsock/virtio_transport.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index f495b9e5186b..1748268e0694 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -153,10 +153,10 @@ virtio_transport_send_pkt_work(struct work_struct *work)
 			 * 'virt_to_phys()' later to fill the buffer descriptor.
 			 * We don't touch memory at "virtual" address of this page.
 			 */
-			va = page_to_virt(skb_frag->bv_page);
+			va = page_to_virt(skb_frag_page(skb_frag));
 			sg_init_one(sgs[out_sg],
-				    va + skb_frag->bv_offset,
-				    skb_frag->bv_len);
+				    va + skb_frag_off(skb_frag),
+				    skb_frag_size(skb_frag));
 			out_sg++;
 		}
 	}
-- 
2.43.0.472.g3155946c3a-goog

From nobody Fri Dec 19 08:25:21 2025
Date: Sun, 17 Dec 2023 00:09:10 -0800
In-Reply-To: <20231217080913.2025973-1-almasrymina@google.com>
References: <20231217080913.2025973-1-almasrymina@google.com>
Message-ID: <20231217080913.2025973-3-almasrymina@google.com>
Subject: [PATCH net-next v2 2/3] net: introduce abstraction for network memory
From: Mina Almasry
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	kvm@vger.kernel.org, virtualization@lists.linux.dev
Cc: Mina Almasry, "David S. Miller", Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Stefan Hajnoczi, Stefano Garzarella, Jason Gunthorpe,
	Christian König, Shakeel Butt, Yunsheng Lin, Willem de Bruijn

Add the netmem_t type, an abstraction for network memory.

To add support for new memory types to the net stack, we must first
abstract the current memory type from it. Currently parts of the net
stack use struct page directly:

- page_pool
- drivers
- skb_frag_t

Originally the plan was to reuse struct page* for the new memory types,
and to set the LSB on the page* to indicate it's not really a page.
However, for compiler type checking we need to introduce a new type.

netmem_t is introduced to abstract the underlying memory type.
Currently it's a no-op abstraction that is always a struct page
underneath. In parallel there is an ongoing effort to add support for
devmem to the net stack:

https://lore.kernel.org/netdev/20231208005250.2910004-1-almasrymina@google.com/

Signed-off-by: Mina Almasry

---

v2:

- Use container_of instead of a type cast (David).
---
 include/net/netmem.h | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)
 create mode 100644 include/net/netmem.h

diff --git a/include/net/netmem.h b/include/net/netmem.h
new file mode 100644
index 000000000000..b60b00216704
--- /dev/null
+++ b/include/net/netmem.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0
+ *
+ *	netmem.h
+ *	Author:	Mina Almasry
+ *	Copyright (C) 2023 Google LLC
+ */
+
+#ifndef _NET_NETMEM_H
+#define _NET_NETMEM_H
+
+struct netmem {
+	union {
+		struct page page;
+
+		/* Stub to prevent compiler implicitly converting from page*
+		 * to netmem_t* and vice versa.
+		 *
+		 * Other memory type(s) net stack would like to support
+		 * can be added to this union.
+		 */
+		void *addr;
+	};
+};
+
+static inline struct page *netmem_to_page(struct netmem *netmem)
+{
+	return &netmem->page;
+}
+
+static inline struct netmem *page_to_netmem(struct page *page)
+{
+	return container_of(page, struct netmem, page);
+}
+
+#endif /* _NET_NETMEM_H */
-- 
2.43.0.472.g3155946c3a-goog

From nobody Fri Dec 19 08:25:21 2025
Date: Sun, 17 Dec 2023 00:09:11 -0800
In-Reply-To: <20231217080913.2025973-1-almasrymina@google.com>
References: <20231217080913.2025973-1-almasrymina@google.com>
Message-ID: <20231217080913.2025973-4-almasrymina@google.com>
Subject: [PATCH net-next v2 3/3] net: add netmem_t to skb_frag_t
From: Mina Almasry
To: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	kvm@vger.kernel.org, virtualization@lists.linux.dev
Cc: Mina Almasry, "David S. Miller", Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Stefan Hajnoczi, Stefano Garzarella, Jason Gunthorpe,
	Christian König, Shakeel Butt, Yunsheng Lin, Willem de Bruijn

Use netmem_t instead of page directly in skb_frag_t. Currently
netmem_t is always a struct page underneath, but the abstraction
allows efforts to add support for skb frags not backed by pages.

There is unfortunately one instance in kcm where the skb_frag_t is
assumed to be a bio_vec. For this case, add a debug assert that the
skb frag is indeed backed by a page, and do a cast.

Add skb[_frag]_fill_netmem_*() and skb_add_rx_frag_netmem() helpers so
that the API can be used to create netmem skbs.

Signed-off-by: Mina Almasry

---

v2:

- Add skb frag filling helpers.
---
 include/linux/skbuff.h | 70 ++++++++++++++++++++++++++++++++----------
 net/core/skbuff.c      | 22 +++++++++----
 net/kcm/kcmsock.c      | 10 ++++--
 3 files changed, 78 insertions(+), 24 deletions(-)

diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 7ce38874dbd1..03ab13072962 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -37,6 +37,7 @@
 #endif
 #include
 #include
+#include
 
 /**
  * DOC: skb checksums
@@ -359,7 +360,11 @@ extern int sysctl_max_skb_frags;
  */
 #define GSO_BY_FRAGS	0xFFFF
 
-typedef struct bio_vec skb_frag_t;
+typedef struct skb_frag {
+	struct netmem *bv_page;
+	unsigned int bv_len;
+	unsigned int bv_offset;
+} skb_frag_t;
 
 /**
  * skb_frag_size() - Returns the size of a skb fragment
@@ -2431,22 +2436,37 @@ static inline unsigned int skb_pagelen(const struct sk_buff *skb)
 	return skb_headlen(skb) + __skb_pagelen(skb);
 }
 
+static inline void skb_frag_fill_netmem_desc(skb_frag_t *frag,
+					     struct netmem *netmem, int off,
+					     int size)
+{
+	frag->bv_page = netmem;
+	frag->bv_offset = off;
+	skb_frag_size_set(frag, size);
+}
+
 static inline void skb_frag_fill_page_desc(skb_frag_t *frag,
 					   struct page *page,
 					   int off, int size)
 {
-	frag->bv_page = page;
-	frag->bv_offset = off;
-	skb_frag_size_set(frag, size);
+	skb_frag_fill_netmem_desc(frag, page_to_netmem(page), off, size);
+}
+
+static inline void __skb_fill_netmem_desc_noacc(struct skb_shared_info *shinfo,
+						int i, struct netmem *netmem,
+						int off, int size)
+{
+	skb_frag_t *frag = &shinfo->frags[i];
+
+	skb_frag_fill_netmem_desc(frag, netmem, off, size);
 }
 
 static inline void __skb_fill_page_desc_noacc(struct skb_shared_info *shinfo,
 					      int i, struct page *page,
 					      int off, int size)
 {
-	skb_frag_t *frag = &shinfo->frags[i];
-
-	skb_frag_fill_page_desc(frag, page, off, size);
+	__skb_fill_netmem_desc_noacc(shinfo, i, page_to_netmem(page), off,
+				     size);
 }
 
 /**
@@ -2462,10 +2482,10 @@ static inline void skb_len_add(struct sk_buff *skb, int delta)
 }
 
 /**
- * __skb_fill_page_desc - initialise a paged fragment in an skb
+ * __skb_fill_netmem_desc - initialise a paged fragment in an skb
  * @skb: buffer containing fragment to be initialised
  * @i: paged fragment index to initialise
- * @page: the page to use for this fragment
+ * @netmem: the netmem to use for this fragment
  * @off: the offset to the data with @page
  * @size: the length of the data
  *
@@ -2474,10 +2494,13 @@ static inline void skb_len_add(struct sk_buff *skb, int delta)
  *
  * Does not take any additional reference on the fragment.
  */
-static inline void __skb_fill_page_desc(struct sk_buff *skb, int i,
-					struct page *page, int off, int size)
+static inline void __skb_fill_netmem_desc(struct sk_buff *skb, int i,
+					  struct netmem *netmem, int off,
+					  int size)
 {
-	__skb_fill_page_desc_noacc(skb_shinfo(skb), i, page, off, size);
+	struct page *page = netmem_to_page(netmem);
+
+	__skb_fill_netmem_desc_noacc(skb_shinfo(skb), i, netmem, off, size);
 
 	/* Propagate page pfmemalloc to the skb if we can. The problem is
 	 * that not all callers have unique ownership of the page but rely
@@ -2485,7 +2508,21 @@ static inline void __skb_fill_page_desc(struct sk_buff *skb, int i,
 	 */
 	page = compound_head(page);
 	if (page_is_pfmemalloc(page))
-		skb->pfmemalloc = true;
+		skb->pfmemalloc = true;
+}
+
+static inline void __skb_fill_page_desc(struct sk_buff *skb, int i,
+					struct page *page, int off, int size)
+{
+	__skb_fill_netmem_desc(skb, i, page_to_netmem(page), off, size);
+}
+
+static inline void skb_fill_netmem_desc(struct sk_buff *skb, int i,
+					struct netmem *netmem, int off,
+					int size)
+{
+	__skb_fill_netmem_desc(skb, i, netmem, off, size);
+	skb_shinfo(skb)->nr_frags = i + 1;
 }
 
 /**
@@ -2505,8 +2542,7 @@ static inline void __skb_fill_page_desc(struct sk_buff *skb, int i,
 static inline void skb_fill_page_desc(struct sk_buff *skb, int i,
 				      struct page *page, int off, int size)
 {
-	__skb_fill_page_desc(skb, i, page, off, size);
-	skb_shinfo(skb)->nr_frags = i + 1;
+	skb_fill_netmem_desc(skb, i, page_to_netmem(page), off, size);
 }
 
 /**
@@ -2532,6 +2568,8 @@ static inline void skb_fill_page_desc_noacc(struct sk_buff *skb, int i,
 
 void skb_add_rx_frag(struct sk_buff *skb, int i, struct page *page, int off,
 		     int size, unsigned int truesize);
+void skb_add_rx_frag_netmem(struct sk_buff *skb, int i, struct netmem *netmem,
+			    int off, int size, unsigned int truesize);
 
 void skb_coalesce_rx_frag(struct sk_buff *skb, int i, int size,
 			  unsigned int truesize);
@@ -3422,7 +3460,7 @@ static inline void skb_frag_off_copy(skb_frag_t *fragto,
  */
 static inline struct page *skb_frag_page(const skb_frag_t *frag)
 {
-	return frag->bv_page;
+	return netmem_to_page(frag->bv_page);
 }
 
 /**
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 83af8aaeb893..053d220aa2f2 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -845,16 +845,24 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
 }
 EXPORT_SYMBOL(__napi_alloc_skb);
 
-void skb_add_rx_frag(struct sk_buff *skb, int i, struct page *page, int off,
-		     int size, unsigned int truesize)
+void skb_add_rx_frag_netmem(struct sk_buff *skb, int i, struct netmem *netmem,
+			    int off, int size, unsigned int truesize)
 {
 	DEBUG_NET_WARN_ON_ONCE(size > truesize);
 
-	skb_fill_page_desc(skb, i, page, off, size);
+	skb_fill_netmem_desc(skb, i, netmem, off, size);
 	skb->len += size;
 	skb->data_len += size;
 	skb->truesize += truesize;
 }
+EXPORT_SYMBOL(skb_add_rx_frag_netmem);
+
+void skb_add_rx_frag(struct sk_buff *skb, int i, struct page *page, int off,
+		     int size, unsigned int truesize)
+{
+	skb_add_rx_frag_netmem(skb, i, page_to_netmem(page), off, size,
+			       truesize);
+}
 EXPORT_SYMBOL(skb_add_rx_frag);
 
 void skb_coalesce_rx_frag(struct sk_buff *skb, int i, int size,
@@ -1868,10 +1876,11 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
 
 	/* skb frags point to kernel buffers */
 	for (i = 0; i < new_frags - 1; i++) {
-		__skb_fill_page_desc(skb, i, head, 0, psize);
+		__skb_fill_netmem_desc(skb, i, page_to_netmem(head), 0, psize);
 		head = (struct page *)page_private(head);
 	}
-	__skb_fill_page_desc(skb, new_frags - 1, head, 0, d_off);
+	__skb_fill_netmem_desc(skb, new_frags - 1, page_to_netmem(head), 0,
+			       d_off);
 	skb_shinfo(skb)->nr_frags = new_frags;
 
 release:
@@ -3609,7 +3618,8 @@ skb_zerocopy(struct sk_buff *to, struct sk_buff *from, int len, int hlen)
 	if (plen) {
 		page = virt_to_head_page(from->head);
 		offset = from->data - (unsigned char *)page_address(page);
-		__skb_fill_page_desc(to, 0, page, offset, plen);
+		__skb_fill_netmem_desc(to, 0, page_to_netmem(page),
+				       offset, plen);
 		get_page(page);
 		j = 1;
 		len -= plen;
diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index 65d1f6755f98..5c46db045f4c 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -636,9 +636,15 @@ static int kcm_write_msgs(struct kcm_sock *kcm)
 		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
 			msize += skb_shinfo(skb)->frags[i].bv_len;
 
+		/* The cast to struct bio_vec* here assumes the frags are
+		 * struct page based.
+		 */
+		DEBUG_NET_WARN_ON_ONCE(
+			!skb_frag_page(&skb_shinfo(skb)->frags[0]));
+
 		iov_iter_bvec(&msg.msg_iter, ITER_SOURCE,
-			      skb_shinfo(skb)->frags, skb_shinfo(skb)->nr_frags,
-			      msize);
+			      (const struct bio_vec *)skb_shinfo(skb)->frags,
+			      skb_shinfo(skb)->nr_frags, msize);
 		iov_iter_advance(&msg.msg_iter, txm->frag_offset);
 
 		do {
-- 
2.43.0.472.g3155946c3a-goog