Deleting large files is time-consuming, and a large part of the
time is spent in f2fs_invalidate_blocks()
->down_write(sit_info->sentry_lock) and up_write().
If some blocks are contiguous, we can process them together.
This reduces the number of down_write() and up_write() calls,
thereby speeding up truncation overall.
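
To make the locking change concrete, here is a minimal userspace
sketch of the idea (not the kernel code from this series): instead
of taking the write lock once per block, take it once per
contiguous run. The pthread rwlock and update_sit_entry() below
are stand-ins for sit_info->sentry_lock and the real SIT update.

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t sentry_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Placeholder for the per-block metadata update done under the lock. */
static void update_sit_entry(unsigned long blkaddr)
{
	(void)blkaddr;
}

/* Old pattern: one write-lock round trip per block. */
static void invalidate_blocks_one_by_one(unsigned long start,
					 unsigned long len)
{
	for (unsigned long b = start; b < start + len; b++) {
		pthread_rwlock_wrlock(&sentry_lock);
		update_sit_entry(b);
		pthread_rwlock_unlock(&sentry_lock);
	}
}

/* New pattern: take the lock once for a whole contiguous run. */
static void invalidate_blocks_range(unsigned long start,
				    unsigned long len)
{
	pthread_rwlock_wrlock(&sentry_lock);
	for (unsigned long b = start; b < start + len; b++)
		update_sit_entry(b);
	pthread_rwlock_unlock(&sentry_lock);
}

int main(void)
{
	invalidate_blocks_one_by_one(1000, 8); /* 8 lock/unlock pairs */
	invalidate_blocks_range(1000, 8);      /* 1 lock/unlock pair  */
	puts("done");
	return 0;
}

For a run of N contiguous blocks this replaces N lock/unlock pairs
with one, which is where the rm speedup comes from.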
Test steps:
Set the CPU and DDR frequencies to the maximum.
dd if=/dev/random of=./test.txt bs=1M count=100000
sync
rm test.txt
Time comparison of rm:

  original    optimized    reduction
  7.17s       3.27s        54.39%

(reduction = (7.17 - 3.27) / 7.17 ≈ 54.39%)
Yi Sun (5):
f2fs: expand f2fs_invalidate_compress_page() to
f2fs_invalidate_compress_pages_range()
f2fs: add parameter @len to f2fs_invalidate_internal_cache()
f2fs: introduce update_sit_entry_for_release()
f2fs: add parameter @len to f2fs_invalidate_blocks()
f2fs: Optimize f2fs_truncate_data_blocks_range()
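
The caller-side batching in the last patch can be sketched in
isolation as well: walk the data block addresses of a node,
coalesce consecutive on-disk addresses into runs, and issue one
range invalidation per run. NULL_ADDR, invalidate_range() and the
address array below are illustrative placeholders for this sketch,
not the series' actual code.

#include <stdio.h>

#define NULL_ADDR 0u	/* placeholder for an unallocated slot */

/* Stand-in for a range invalidation such as
 * f2fs_invalidate_blocks(sbi, start, len): one call covers the
 * whole contiguous run. */
static void invalidate_range(unsigned int start, unsigned int len)
{
	printf("invalidate [%u, %u) (%u blocks)\n",
	       start, start + len, len);
}

/* Merge slots whose block addresses are consecutive on disk into
 * a single invalidation call; holes (NULL_ADDR) end a run. */
static void truncate_data_blocks_range(const unsigned int *addr,
				       int count)
{
	unsigned int run_start = NULL_ADDR;
	unsigned int run_len = 0;

	for (int i = 0; i < count; i++) {
		if (run_len && addr[i] == run_start + run_len) {
			run_len++;		/* extends current run */
			continue;
		}
		if (run_len)
			invalidate_range(run_start, run_len);
		if (addr[i] != NULL_ADDR) {
			run_start = addr[i];	/* start a new run */
			run_len = 1;
		} else {
			run_len = 0;		/* hole: no run open */
		}
	}
	if (run_len)
		invalidate_range(run_start, run_len);
}

int main(void)
{
	/* two contiguous runs separated by a hole */
	unsigned int addrs[] = { 100, 101, 102, NULL_ADDR, 200, 201 };
	truncate_data_blocks_range(addrs, 6);
	return 0;
}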
fs/f2fs/compress.c | 9 +--
fs/f2fs/data.c | 2 +-
fs/f2fs/f2fs.h | 16 +++---
fs/f2fs/file.c | 78 ++++++++++++++++++++++---
fs/f2fs/gc.c | 2 +-
fs/f2fs/node.c | 4 +-
fs/f2fs/segment.c | 139 +++++++++++++++++++++++++++++++--------------
7 files changed, 184 insertions(+), 66 deletions(-)
--
2.25.1