[RFC PATCH 0/2] Speed up f2fs truncate

Yi Sun posted 2 patches 1 month, 1 week ago
There is a newer version of this series
Deleting large files is time-consuming, and a large part of the
time is spent in f2fs_invalidate_blocks(), in the
down_write(sit_info->sentry_lock) / up_write() pair.

If some blocks are contiguous and belong to the same segment,
we can process them together. This reduces the number of
down_write() and up_write() calls, thereby speeding up
truncate overall.

Test steps:
Set the CPU and DDR frequencies to the maximum.
dd if=/dev/random of=./test.txt bs=1M count=100000
sync
rm test.txt

Time comparison of rm:
original	optimized	reduction
7.17s		3.27s		54.39%

Hi, currently I have only optimized the f2fs truncate path;
other callers of f2fs_invalidate_blocks() are not taken into
consideration. So the new functions
f2fs_invalidate_compress_pages_range() and
check_f2fs_invalidate_consecutive_blocks() are not general-purpose.
Is this modification acceptable?

Yi Sun (2):
  f2fs: introduce update_sit_entry_for_release()
  f2fs: introduce f2fs_invalidate_consecutive_blocks() for truncate

 fs/f2fs/compress.c |  14 ++++++
 fs/f2fs/f2fs.h     |   5 ++
 fs/f2fs/file.c     |  34 ++++++++++++-
 fs/f2fs/segment.c  | 116 +++++++++++++++++++++++++++++++--------------
 4 files changed, 133 insertions(+), 36 deletions(-)

-- 
2.25.1