[PATCH v2 0/5] Speed up f2fs truncate

Posted by Yi Sun 3 weeks, 4 days ago
Deleting large files is time-consuming, and a large part of
that time is spent in f2fs_invalidate_blocks(), in the
down_write(sit_info->sentry_lock)/up_write() pair taken for
each block.

If the blocks being freed are contiguous, they can be
processed in one batch. This reduces the number of
down_write() and up_write() calls, thereby improving the
overall speed of truncate.

Test steps:
Set the CPU and DDR frequencies to the maximum.
dd if=/dev/random of=./test.txt bs=1M count=100000
sync
rm test.txt

Time comparison of rm:
original        optimized               time reduced
7.17s           3.27s                   54.39%

Yi Sun (5):
  f2fs: blocks need to belong to the same segment when using
    update_sit_entry()
  f2fs: expand f2fs_invalidate_compress_page() to
    f2fs_invalidate_compress_pages_range()
  f2fs: add parameter @len to f2fs_invalidate_internal_cache()
  f2fs: add parameter @len to f2fs_invalidate_blocks()
  f2fs: Optimize f2fs_truncate_data_blocks_range()

 fs/f2fs/compress.c | 11 +++---
 fs/f2fs/data.c     |  2 +-
 fs/f2fs/f2fs.h     | 16 +++++----
 fs/f2fs/file.c     | 78 ++++++++++++++++++++++++++++++++++++++----
 fs/f2fs/gc.c       |  2 +-
 fs/f2fs/node.c     |  4 +--
 fs/f2fs/segment.c  | 84 +++++++++++++++++++++++++++++++++++++++-------
 7 files changed, 161 insertions(+), 36 deletions(-)

-- 
2.25.1