If *len is 0 at the start of __splice_segment, it returns true
immediately. But when *len is decremented from a positive value
to 0 inside the loop, __splice_segment returns false and the
caller has to invoke it again just to hit that early return.

Recheck *len after it changes and return true as soon as it
reaches 0. This avoids an unnecessary extra call to
__splice_segment.
Signed-off-by: Pengtao He <hept.hept.hept@gmail.com>
---
v4:
Correct the commit message.
v3:
Avoid one redundant condition evaluation.
v2:
Correct the commit message and target tree.
v1:
https://lore.kernel.org/netdev/20250723063119.24059-1-hept.hept.hept@gmail.com/
---
net/core/skbuff.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index ee0274417948..23b776cd9879 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3112,7 +3112,9 @@ static bool __splice_segment(struct page *page, unsigned int poff,
poff += flen;
plen -= flen;
*len -= flen;
- } while (*len && plen);
+ if (!*len)
+ return true;
+ } while (plen);
return false;
}
--
2.49.0
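
To see the effect, here is a minimal userspace model of the pattern
(the segment_old/segment_new names and the simplified caller are made
up for illustration; the real __splice_segment passes pages through
spd_fill_page and the real caller iterates over skb frags):

/* build: gcc -Wall -o splice_model splice_model.c */
#include <stdbool.h>
#include <stdio.h>

static int calls;

/* old behavior: the loop also exits when *len just reached 0, but
 * reports false, so the caller tries the next segment only to hit
 * the !*len check on entry
 */
static bool segment_old(unsigned int *len, unsigned int plen)
{
	calls++;

	if (!*len)
		return true;

	do {
		unsigned int flen = *len < plen ? *len : plen;

		plen -= flen;
		*len -= flen;
	} while (*len && plen);

	return false;
}

/* patched behavior: report completion as soon as *len drops to 0 */
static bool segment_new(unsigned int *len, unsigned int plen)
{
	calls++;

	if (!*len)
		return true;

	do {
		unsigned int flen = *len < plen ? *len : plen;

		plen -= flen;
		*len -= flen;
		if (!*len)
			return true;
	} while (plen);

	return false;
}

int main(void)
{
	unsigned int len;

	/* caller pattern as in __skb_splice_bits: walk segments until
	 * one call reports the requested length fully consumed
	 */
	calls = 0;
	len = 100;
	if (!segment_old(&len, 100))	/* consumes everything, returns false */
		segment_old(&len, 100);	/* extra call, true on entry */
	printf("old: %d calls\n", calls);	/* old: 2 calls */

	calls = 0;
	len = 100;
	if (!segment_new(&len, 100))	/* consumes everything, returns true */
		segment_new(&len, 100);	/* never reached */
	printf("new: %d calls\n", calls);	/* new: 1 call */

	return 0;
}
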
Pengtao He wrote:
> If *len is 0 at the start of __splice_segment, it returns true
> immediately. But when *len is decremented from a positive value
> to 0 inside the loop, __splice_segment returns false and the
> caller has to invoke it again just to hit that early return.
>
> Recheck *len after it changes and return true as soon as it
> reaches 0. This avoids an unnecessary extra call to
> __splice_segment.

Fix is a strong term. The existing behavior is correct, it just takes
an extra pass through the loop in caller __skb_splice_bits. As also
indicated by this patch targeting net-next.

I would suggest something like "net: avoid one loop iteration in
__skb_splice_bits"
> Fix is a strong term. The existing behavior is correct, it just takes
> an extra pass through the loop in caller __skb_splice_bits. As also
> indicated by this patch targeting net-next.
>
> I would suggest something like "net: avoid one loop iteration in
> __skb_splice_bits"

Thanks for the detailed suggestion. I will correct it.