Hi Eric, Dominique,
Here are some netfslib-related changes we might want to consider applying
to 9p:
(1) Enable large folio support for 9p. This is handled entirely by
netfslib and is already supported in afs. I wonder if we should limit
the maximum folio size to 1MiB to match the maximum I/O size in the 9p
protocol (see the sketch after this list).
(2) Make better use of netfslib's writethrough caching support by not
disabling caching for O_DSYNC. netfs_perform_write() will set up
and dispatch write requests as it copies data into the pagecache.
(3) Always update netfs_inode::remote_size to reflect what we think the
server's idea of the file size is. This is separate from
inode::i_size, which is our idea of what it should be once all of our
outstanding dirty data is committed.
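
For (1), the 9p-side change should be tiny, since netfslib does all of the
folio handling. A minimal sketch (assuming the current netfs_inode_init()
signature and the existing v9fs_set_netfs_context() helper in
fs/9p/vfs_inode.c) might look like this:

static void v9fs_set_netfs_context(struct inode *inode)
{
	struct v9fs_inode *v9inode = V9FS_I(inode);

	netfs_inode_init(&v9inode->netfs, &v9fs_req_ops, true);
	/* Let the pagecache use multi-page folios for this inode;
	 * netfslib takes care of reading and writing them.
	 */
	mapping_set_large_folios(inode->i_mapping);
}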
The patches can also be found here:
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=netfs-9p
Thanks,
David
David Howells (2):
9p: Make better use of netfslib's writethrough caching
9p: Always update remote_i_size in stat2inode
Dominique Martinet (1):
9p: Enable large folio support
 fs/9p/fid.h            | 3 +--
 fs/9p/vfs_inode.c      | 1 +
 fs/9p/vfs_inode_dotl.c | 6 +++---
 3 files changed, 5 insertions(+), 5 deletions(-)
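
For context on how small these changes are: (2) plausibly boils down to no
longer marking O_DSYNC opens as write-uncacheable in v9fs_fid_add_modes()
(fs/9p/fid.h), which would match the "3 +--" in the diffstat. A sketch of the
resulting condition, not the verbatim patch:

	/* Drop the O_DSYNC test so such opens keep the write cache;
	 * netfslib's writethrough mode dispatches the writes as the
	 * data is copied into the pagecache.
	 */
	if ((!(s_cache & CACHE_WRITEBACK)) || (s_flags & V9FS_SYNC))
		fid->mode |= P9L_NOWRITECACHE;

And (3) would amount to an extra store in the stat2inode paths; a sketch,
with field names as in the current fs/9p code:

	/* Record the server's idea of the file size unconditionally;
	 * this is separate from i_size, which also reflects our
	 * as-yet-uncommitted dirty data.
	 */
	v9inode->netfs.remote_i_size = stat->length;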
On Monday, January 29, 2024 12:54:34 PM CET David Howells wrote:
> Hi Eric, Dominique,
>
> Here are some netfslib-related changes we might want to consider applying
> to 9p:
>
> (1) Enable large folio support for 9p. This is handled entirely by
>     netfslib and is already supported in afs. I wonder if we should limit
>     the maximum folio size to 1MiB to match the maximum I/O size in the 9p
>     protocol.

The limit depends on the user's 'msize' 9p client option and on the 9p
transport implementation. The hard limit with the virtio transport, for
instance, is currently just 500k (patches raising the virtio limit to 4MB
are gathering dust, unfortunately).

Would you see an advantage to limiting the folio size? I mean,
p9_client_read() etc. are automatically limiting the read/write chunk size
accordingly.

> (2) Make better use of netfslib's writethrough caching support by not
>     disabling caching for O_DSYNC. netfs_perform_write() will set up
>     and dispatch write requests as it copies data into the pagecache.
>
> (3) Always update netfs_inode::remote_size to reflect what we think the
>     server's idea of the file size is. This is separate from
>     inode::i_size, which is our idea of what it should be once all of our
>     outstanding dirty data is committed.
>
> The patches can also be found here:
>
>   https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=netfs-9p
>
> Thanks,
> David
>
> David Howells (2):
>   9p: Make better use of netfslib's writethrough caching
>   9p: Always update remote_i_size in stat2inode
>
> Dominique Martinet (1):
>   9p: Enable large folio support
>
>  fs/9p/fid.h            | 3 +--
>  fs/9p/vfs_inode.c      | 1 +
>  fs/9p/vfs_inode_dotl.c | 6 +++---
>  3 files changed, 5 insertions(+), 5 deletions(-)
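
(The chunk limiting referred to above is done per request in
net/9p/client.c. Paraphrased as a sketch, with a hypothetical helper name,
mirroring what p9_client_read()/p9_client_write() do:)

	/* Each Tread/Twrite chunk is limited by the fid's iounit and by
	 * msize minus the I/O header (P9_IOHDRSZ), so an oversized folio
	 * would simply be transferred as several messages.
	 */
	static size_t p9_chunk_size(struct p9_fid *fid, size_t count)
	{
		struct p9_client *clnt = fid->clnt;
		size_t rsize = fid->iounit;

		if (!rsize || rsize > clnt->msize - P9_IOHDRSZ)
			rsize = clnt->msize - P9_IOHDRSZ;

		return min(count, rsize);
	}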
Christian Schoenebeck <linux_oss@crudebyte.com> wrote:

> > (1) Enable large folio support for 9p. This is handled entirely by
> >     netfslib and is already supported in afs. I wonder if we should limit
> >     the maximum folio size to 1MiB to match the maximum I/O size in the 9p
> >     protocol.
>
> The limit depends on the user's 'msize' 9p client option and on the 9p
> transport implementation. The hard limit with the virtio transport, for
> instance, is currently just 500k (patches raising the virtio limit to 4MB
> are gathering dust, unfortunately).

Okay. Is that 500KiB or 512KiB?

> Would you see an advantage to limiting the folio size? I mean,
> p9_client_read() etc. are automatically limiting the read/write chunk size
> accordingly.

For reads not so much, but for writes it would mean that a dirty folio is
either entirely written or entirely failed. I don't know how important this
would be for the 9p use cases.

David
On Monday, January 29, 2024 3:22:15 PM CET David Howells wrote:
> Christian Schoenebeck <linux_oss@crudebyte.com> wrote:
>
> > > (1) Enable large folio support for 9p. This is handled entirely by
> > >     netfslib and is already supported in afs. I wonder if we should
> > >     limit the maximum folio size to 1MiB to match the maximum I/O size
> > >     in the 9p protocol.
> >
> > The limit depends on the user's 'msize' 9p client option and on the 9p
> > transport implementation. The hard limit with the virtio transport, for
> > instance, is currently just 500k (patches raising the virtio limit to
> > 4MB are gathering dust, unfortunately).
>
> Okay. Is that 500KiB or 512KiB?

'msize' is currently hard limited by the virtio transport to exactly 512000
bytes. For the rdma and fd transports it is exactly 1MiB in both cases. For
the xen transport it should be exactly 524288 (though it could be lowered
depending on the configured xen ring size). You can see the individual
transports filling in the 'maxsize' field accordingly (in net/9p/trans_*.c).

So that's the maximum message size. The individual 9p message header size
then needs to be subtracted from it: 23 bytes for a Twrite request, 11 bytes
for an Rread response.

> > Would you see an advantage to limiting the folio size? I mean,
> > p9_client_read() etc. are automatically limiting the read/write chunk
> > size accordingly.
>
> For reads not so much, but for writes it would mean that a dirty folio is
> either entirely written or entirely failed. I don't know how important
> this would be for the 9p use cases.
>
> David
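
(Putting those figures together, the effective per-message payload limits
work out as follows; this is just the arithmetic from the numbers above:)

	transport    msize     max Twrite payload        max Rread payload
	virtio       512000    512000 - 23 =  511977     512000 - 11 =  511989
	rdma, fd    1048576   1048576 - 23 = 1048553    1048576 - 11 = 1048565
	xen          524288    524288 - 23 =  524265     524288 - 11 =  524277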