[PATCH v8 0/7] 9pfs: readdir optimization

Christian Schoenebeck posted 7 patches 3 years, 8 months ago
Maintainers: Greg Kurz <groug@kaod.org>, Laurent Vivier <lvivier@redhat.com>, Christian Schoenebeck <qemu_oss@crudebyte.com>, Thomas Huth <thuth@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>
As previously mentioned, I was investigating performance issues with 9pfs.
Raw file read/write performance of 9pfs is actually quite good, provided
that the client picked a reasonably high msize (maximum message size). I
would recommend logging a warning on the 9p server side if a client
attaches with a small msize, as that would cause performance issues for
that reason.

However, there are other aspects where 9pfs currently performs suboptimally.
In particular, readdir handling is extremely slow: a simple readdir request
from a guest typically blocks for several hundred milliseconds or even
several seconds, no matter how powerful the underlying hardware is.
The reason for this performance issue is latency:
9pfs currently dispatches a T_readdir request numerous times back and forth
between the main I/O thread and a background I/O thread; in fact it hops
between threads multiple times for every single directory entry, which adds
up to a huge overall latency for a single T_readdir request.

This patch series aims to address this severe performance issue of 9pfs
T_readdir request handling. The actual performance optimization is patch 5.

v7->v8:

  * Split previous patch 3 into two patches [patch 3], [patch 4].

  * Error out if a 9p2000.u client sends a Treaddir request, likewise error
    out if a 9p2000.L client sends a Tread request on a directory [patch 6].

Unchanged patches: [patch 1], [patch 2], [patch 5], [patch 7].

Message-ID of previous version (v7):
  cover.1595166227.git.qemu_oss@crudebyte.com

Message-ID of version with performance benchmark (v4):
  cover.1579567019.git.qemu_oss@crudebyte.com

Christian Schoenebeck (7):
  tests/virtio-9p: added split readdir tests
  9pfs: make v9fs_readdir_response_size() public
  9pfs: split out fs driver core of v9fs_co_readdir()
  9pfs: add new function v9fs_co_readdir_many()
  9pfs: T_readdir latency optimization
  9pfs: differentiate readdir lock between 9P2000.u vs. 9P2000.L
  9pfs: clarify latency of v9fs_co_run_in_worker()

 hw/9pfs/9p.c                 | 159 ++++++++++++++--------------
 hw/9pfs/9p.h                 |  50 ++++++++-
 hw/9pfs/codir.c              | 196 +++++++++++++++++++++++++++++++++--
 hw/9pfs/coth.h               |  15 ++-
 tests/qtest/virtio-9p-test.c | 108 +++++++++++++++++++
 5 files changed, 434 insertions(+), 94 deletions(-)

-- 
2.20.1