From: Chi Zhiling <chizhiling@kylinos.cn>

This is a patch attempting to implement concurrent buffered writes.
The main idea is to use the folio lock to ensure the atomicity of the
write when writing to a single folio, instead of using the i_rwsem.

I tried the "folio batch" solution, which is a great idea, but during
testing, I encountered an OOM issue because the locked folios couldn't
be reclaimed.

So for now, I can only allow concurrent writes within a single block.
The good news is that since we already support BS > PS, we can use a
larger block size to enable higher granularity concurrency.

These ideas come from previous discussions:
https://lore.kernel.org/all/953b0499-5832-49dc-8580-436cf625db8c@163.com/

Chi Zhiling (2):
  xfs: Add i_direct_mode to indicate the IO mode of inode
  xfs: Enable concurrency when writing within single block

 fs/xfs/xfs_file.c  | 71 ++++++++++++++++++++++++++++++++++++++++++----
 fs/xfs/xfs_inode.h |  6 ++++
 2 files changed, 72 insertions(+), 5 deletions(-)

--
2.43.0
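[For illustration only: the "single block" gate described in the cover letter
implies a check along these lines in the shared buffered write fast path.
This is a sketch under that assumption; xfs_buffered_write_single_block() is
a hypothetical helper, not taken from the posted patches.]

/*
 * Hypothetical sketch, not from the posted patches: a buffered write
 * may only run under IOLOCK_SHARED if the whole range lands in a
 * single filesystem block (and hence a single folio when the block
 * size is at least the page size), so the folio lock alone is enough
 * to serialise it against other writers.
 */
static bool
xfs_buffered_write_single_block(
	struct xfs_inode	*ip,
	loff_t			pos,
	size_t			count)
{
	struct xfs_mount	*mp = ip->i_mount;

	if (!count)
		return true;

	/* first and last byte of the write must be in the same block */
	return XFS_B_TO_FSBT(mp, pos) == XFS_B_TO_FSBT(mp, pos + count - 1);
}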
On Fri, Apr 25, 2025 at 06:38:39PM +0800, Chi Zhiling wrote:
> From: Chi Zhiling <chizhiling@kylinos.cn>
>
> This is a patch attempting to implement concurrent buffered writes.
> The main idea is to use the folio lock to ensure the atomicity of the
> write when writing to a single folio, instead of using the i_rwsem.
>
> I tried the "folio batch" solution, which is a great idea, but during
> testing, I encountered an OOM issue because the locked folios couldn't
> be reclaimed.
>
> So for now, I can only allow concurrent writes within a single block.
> The good news is that since we already support BS > PS, we can use a
> larger block size to enable higher granularity concurrency.
I'm not going to say no to this, but I think it's a short term and
niche solution to the general problem of enabling shared buffered
writes. i.e. I expect that it will not exist for long, whilst
experience tells me that adding special cases to the IO path locking
has a fairly high risk of unexpected regressions and/or data
corruption....
> These ideas come from previous discussions:
> https://lore.kernel.org/all/953b0499-5832-49dc-8580-436cf625db8c@163.com/
In my spare time I've been looking at using the two state lock from
bcachefs for this because it looks to provide a general solution to
the issue of concurrent buffered writes.
The two valid IO exclusion states are:
+enum {
+ XFS_IOTYPE_BUFFERED = 0,
+ XFS_IOTYPE_DIRECT = 1,
+};
Importantly, this gives us three states, not two:
1. Buffered IO in progress,
2. Direct IO in progress, and
3. No IO in progress. (i.e. not held at all)
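A minimal sketch of what such a two-state shared lock could look like,
loosely modelled on the bcachefs two_state_shared_lock (illustrative
only, not the actual bcachefs code; the names here are assumptions):

#include <linux/atomic.h>
#include <linux/wait.h>

/*
 *   v > 0:  held shared in state 1 (e.g. XFS_IOTYPE_DIRECT)
 *   v < 0:  held shared in state 0 (e.g. XFS_IOTYPE_BUFFERED)
 *   v == 0: not held at all - the third state that truncate needs
 */
typedef struct {
	atomic_long_t		v;
	wait_queue_head_t	wait;
} two_state_lock_t;

static inline void two_state_lock_init(two_state_lock_t *lock)
{
	atomic_long_set(&lock->v, 0);
	init_waitqueue_head(&lock->wait);
}

static inline bool two_state_trylock(two_state_lock_t *lock, int state)
{
	long i = state ? 1 : -1;
	long old = atomic_long_read(&lock->v);

	do {
		/* opposite state holders present - can't take it now */
		if (state ? old < 0 : old > 0)
			return false;
	} while (!atomic_long_try_cmpxchg_acquire(&lock->v, &old, old + i));

	return true;
}

static inline void two_state_lock(two_state_lock_t *lock, int state)
{
	/* sleep until all opposite-state holders have dropped the lock */
	wait_event(lock->wait, two_state_trylock(lock, state));
}

static inline void two_state_unlock(two_state_lock_t *lock, int state)
{
	long i = state ? 1 : -1;

	/* release pairs with the acquire in two_state_trylock() */
	if (atomic_long_sub_return_release(i, &lock->v) == 0)
		wake_up_all(&lock->wait);
}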
When we do operations like truncate or hole punch, we need the state
to be #3 - no IO in progress.
Hence we can use this like we currently use i_dio_count for
truncate with the correct lock ordering. That is, we order the
IOLOCK before the IOTYPE lock:
Buffered IO:
IOLOCK_SHARED, IOLOCK_EXCL if IREMAPPING
<IREMAPPING excluded>
IOTYPE_BUFFERED
<block waiting for in progress DIO>
<do buffered IO>
unlock IOTYPE_BUFFERED
unlock IOLOCK
IREMAPPING IO:
IOLOCK_EXCL
set IREMAPPING
demote to IOLOCK_SHARED
IOTYPE_BUFFERED
<block waiting for in progress DIO>
<do reflink operation>
unlock IOTYPE_BUFFERED
clear IREMAPPING
unlock IOLOCK
Direct IO:
IOLOCK_SHARED
IOTYPE_DIRECT
<block waiting for in progress buffered, IREMAPPING>
<do direct IO>
<submission>
unlock IOLOCK_SHARED
<completion>
unlock IOTYPE_DIRECT
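As a rough illustration of that ordering in code, the synchronous DIO
read case could look like the simplified sketch below, reusing the
two-state lock sketch above; ip->i_write_lock and the XFS_IOTYPE_*
usage are assumed names, not existing XFS code. The AIO case would
defer the IOTYPE_DIRECT unlock to IO completion, as discussed further
down.

STATIC ssize_t
xfs_file_dio_read(
	struct kiocb		*iocb,
	struct iov_iter		*to)
{
	struct xfs_inode	*ip = XFS_I(file_inode(iocb->ki_filp));
	ssize_t			ret;

	ret = xfs_ilock_iocb(iocb, XFS_IOLOCK_SHARED);
	if (ret)
		return ret;

	/* the IO-type state nests inside the IOLOCK */
	two_state_lock(&ip->i_write_lock, XFS_IOTYPE_DIRECT);
	ret = iomap_dio_rw(iocb, to, &xfs_read_iomap_ops, NULL, 0, NULL, 0);
	two_state_unlock(&ip->i_write_lock, XFS_IOTYPE_DIRECT);

	xfs_iunlock(ip, XFS_IOLOCK_SHARED);
	return ret;
}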
Notes on DIO write file extension w.r.t. xfs_file_write_zero_eof():
- xfs_file_write_zero_eof() does buffered IO.
- needs to switch from XFS_IOTYPE_DIRECT to XFS_IOTYPE_BUFFERED
- this locks out all other DIO, as the current switch to
IOLOCK_EXCL will do.
- DIO write path no longer needs IOLOCK_EXCL to serialise post-EOF
block zeroing against other concurrent DIO writes.
- future optimisation target so that it doesn't serialise against
other DIO (reads or writes) within EOF.
This path looks like:
Direct IO extension:
IOLOCK_EXCL
IOTYPE_BUFFERED
<block waiting for in progress DIO>
xfs_file_write_zero_eof();
demote to IOLOCK_SHARED
IOTYPE_DIRECT
<block waiting for buffered, IREMAPPING>
<do direct IO>
<submission>
unlock IOLOCK_SHARED
<completion>
unlock IOTYPE_DIRECT
Notes on xfs_file_dio_write_unaligned()
- this drains all DIO in flight so it has exclusive access to the
given block being written to. This prevents races doing IO (read
or write, buffered or direct) to that specific block.
- essentially does an exclusive, synchronous DIO write after
draining all DIO in flight. Very slow, reliant on inode_dio_wait()
existing.
- make the slow path after failing the unaligned overwrite a
buffered write.
- switching modes to buffered drains all the DIO in flight,
the buffered write does all the necessary sub-block zeroing in memory,
and the next overlapping DIO or fdatasync() will flush it to disk.
This slow path looks like:
IOLOCK_EXCL
IOTYPE_BUFFERED
<excludes all concurrent DIO>
set IOCB_DONTCACHE
iomap_file_buffered_write()
Truncate and other IO exclusion code such as fallocate() need to do
this:
IOLOCK_EXCL
<wait for IO state to become unlocked>
The IOLOCK_EXCL creates a submission barrier, and the "wait for IO
state to become unlocked" ensures that all buffered and direct IO
have been drained and there is no IO in flight at all.
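In terms of the lock sketch above, that "wait for IO state to become
unlocked" step could be a small helper like this (assumed names,
illustration only):

static void
xfs_wait_for_io_unlocked(
	struct xfs_inode	*ip)
{
	two_state_lock_t	*lock = &ip->i_write_lock;

	/*
	 * The caller holds IOLOCK_EXCL, so no new buffered or direct
	 * IO can be submitted while we wait.  v == 0 means neither
	 * IO type currently holds the lock, i.e. no IO in flight.
	 */
	wait_event(lock->wait, atomic_long_read(&lock->v) == 0);
}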
The upside of this is that we get rid of the dependency on
inode->i_dio_count and we ensure that we don't potentially need a
similar counter for buffered writes in future. e.g. buffered
AIO+RWF_DONTCACHE+RWF_DSYNC could be optimised to use FUA and/or IO
completion side DSYNC operations like AIO+DIO+RWF_DSYNC currently
does and that would currently need in-flight IO tracking for truncate
synchronisation. The two-state lock solution avoids that completely.
Some work needs to be done to enable sane IO completion unlocking
(i.e. from dio->end_io). My current notes on this say:
- ->end_io only gets called once when all bios submitted for the dio
are complete. hence only one completion, so unlock is balanced
- caller has no idea on error if IO was submitted and completed;
if dio->end_io unlocks on IO error, the waiting submitter has no
clue whether it has to unlock or not.
- need a clean submitter unlock model. Alternatives?
- dio->end_io only unlocks on IO error when
dio->wait_for_completion is not set (i.e. completing an AIO,
submitter was given -EIOCBQUEUED). iomap_dio_rw() caller can
then do:
if (ret < 0 && ret != -EIOCBQUEUED) {
/* unlock inode */
}
- if end_io is checking ->wait_for_completion, only ever unlock
if it isn't set? i.e. if there is a waiter, we leave it to them
to unlock? Simpler rule for ->end_io, cleaner for the submitter
to handle:
if (ret != -EIOCBQUEUED) {
/* unlock inode */
}
- need to move DIO write page cache invalidation and inode_dio_end()
into ->end_io for implementations
- if no ->end_io provided, do what the current code does.
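Combined with the IO-type lock, the submitter side of that rule might
end up looking something like this (hypothetical helper, assumed names;
the exact unlock protocol is what's being worked out above):

static ssize_t
xfs_dio_write_submit(
	struct kiocb		*iocb,
	struct iov_iter		*from,
	struct xfs_inode	*ip)
{
	ssize_t			ret;

	ret = iomap_dio_rw(iocb, from, &xfs_direct_write_iomap_ops,
			&xfs_dio_write_ops, 0, NULL, 0);

	/*
	 * -EIOCBQUEUED means the AIO is still in flight and ->end_io
	 * owns the IOTYPE_DIRECT unlock at completion.  Anything else,
	 * success or error, means the IO is over from the submitter's
	 * point of view, so drop the IO-type state here.
	 */
	if (ret != -EIOCBQUEUED)
		two_state_unlock(&ip->i_write_lock, XFS_IOTYPE_DIRECT);
	return ret;
}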
There are also a few changes needed to avoid inode->i_dio_count in
iomap:
- need a flag to tell iomap_dio_rw() not to account the DIO
- inode_dio_end() may need to be moved to ->dio_end, or we could
use the "do not account" flag to avoid it.
- However, page cache invalidation and dsync work needs to be done
before in-flight dio release, so we likely need to move this
stuff to ->end_io before we drop the IOTYPE lock...
- probably can be handled with appropriate helpers...
I've implemented some of this already; I'm currently in the process
of making truncate exclusion work correctly. Once that works, I'll
post the code....
-Dave.
--
Dave Chinner
david@fromorbit.com
On 2025/4/30 10:05, Dave Chinner wrote:
> On Fri, Apr 25, 2025 at 06:38:39PM +0800, Chi Zhiling wrote:
>> From: Chi Zhiling <chizhiling@kylinos.cn>
>>
>> This is a patch attempting to implement concurrent buffered writes.
>> The main idea is to use the folio lock to ensure the atomicity of the
>> write when writing to a single folio, instead of using the i_rwsem.
>>
>> I tried the "folio batch" solution, which is a great idea, but during
>> testing, I encountered an OOM issue because the locked folios couldn't
>> be reclaimed.
>>
>> So for now, I can only allow concurrent writes within a single block.
>> The good news is that since we already support BS > PS, we can use a
>> larger block size to enable higher granularity concurrency.
>
> I'm not going to say no to this, but I think it's a short term and
> niche solution to the general problem of enabling shared buffered
> writes. i.e. I expect that it will not exist for long, whilst
Hi, Dave,
Yes, it's a short-term solution, but it's enough for some scenarios.
I would also like to see a better idea.
> experience tells me that adding special cases to the IO path locking
> has a fairly high risk of unexpected regressions and/or data
> corruption....
I can't say there is definitely no data corruption, but I haven't seen
any new errors in xfstests.
We might need to add some assertions in the code to check for the risk
of data corruption, not specifically for this patch, but for the current
XFS system in general. This would help developers avoid introducing new
bugs, similar to the lockdep tool.
>
>> These ideas come from previous discussions:
>> https://lore.kernel.org/all/953b0499-5832-49dc-8580-436cf625db8c@163.com/
>
> In my spare time I've been looking at using the two state lock from
> bcachefs for this because it looks to provide a general solution to
> the issue of concurrent buffered writes.
In fact, I have tried the two state lock, and it does work quite well.
However, I noticed some performance degradation in single-threaded
scenarios in UnixBench (I'm not sure if it's caused by the memory
barrier).
Since single-threaded bufferedio is still the primary read-write mode,
I don't want to introduce too much impact in single-threaded scenarios.
That's why I introduced the i_direct_mode flag, which is protected by
i_rwsem. This approach only adds a boolean operation in the fast path.
>
> The two valid IO exclusion states are:
>
> +enum {
> + XFS_IOTYPE_BUFFERED = 0,
> + XFS_IOTYPE_DIRECT = 1,
> +};
>
> Importantly, this gives us three states, not two:
>
> 1. Buffered IO in progress,
> 2. Direct IO in progress, and
> 3. No IO in progress. (i.e. not held at all)
>
> When we do operations like truncate or hole punch, we need the state
> to be #3 - no IO in progress.
>
> Hence we can use this like we currently use i_dio_count for
> truncate with the correct lock ordering. That is, we order the
> IOLOCK before the IOTYPE lock:
>
> Buffered IO:
>
> IOLOCK_SHARED, IOLOCK_EXCL if IREMAPPING
> <IREMAPPING excluded>
> IOTYPE_BUFFERED
> <block waiting for in progress DIO>
> <do buffered IO>
> unlock IOTYPE_BUFFERED
> unlock IOLOCK
>
> IREMAPPING IO:
>
> IOLOCK_EXCL
> set IREMAPPING
> demote to IOLOCK_SHARED
> IOTYPE_BUFFERED
> <block waiting for in progress DIO>
> <do reflink operation>
> unlock IOTYPE_BUFFERED
> clear IREMAPPING
> unlock IOLOCK
>
> Direct IO:
>
> IOLOCK_SHARED
> IOTYPE_DIRECT
> <block waiting for in progress buffered, IREMAPPING>
> <do direct IO>
> <submission>
> unlock IOLOCK_SHARED
> <completion>
> unlock IOTYPE_DIRECT
>
> Notes on DIO write file extension w.r.t. xfs_file_write_zero_eof():
> - xfs_file_write_zero_eof() does buffered IO.
> - needs to switch from XFS_IOTYPE_DIRECT to XFS_IOTYPE_BUFFERED
> - this locks out all other DIO, as the current switch to
> IOLOCK_EXCL will do.
> - DIO write path no longer needs IOLOCK_EXCL to serialise post-EOF
> block zeroing against other concurrent DIO writes.
> - future optimisation target so that it doesn't serialise against
> other DIO (reads or writes) within EOF.
>
> This path looks like:
>
> Direct IO extension:
>
> IOLOCK_EXCL
> IOTYPE_BUFFERED
> <block waiting for in progress DIO>
> xfs_file_write_zero_eof();
> demote to IOLOCK_SHARED
> IOTYPE_DIRECT
> <block waiting for buffered, IREMAPPING>
> <do direct IO>
> <submission>
> unlock IOLOCK_SHARED
> <completion>
> unlock IOTYPE_DIRECT
>
> Notes on xfs_file_dio_write_unaligned()
> - this drains all DIO in flight so it has exclusive access to the
> given block being written to. This prevents races doing IO (read
> or write, buffered or direct) to that specific block.
> - essentially does an exclusive, synchronous DIO write after
> draining all DIO in flight. Very slow, reliant on inode_dio_wait()
> existing.
> - make the slow path after failing the unaligned overwrite a
> buffered write.
> - switching modes to buffered drains all the DIO in flight,
> the buffered write does all the necessary sub-block zeroing in memory,
> and the next overlapping DIO or fdatasync() will flush it to disk.
>
> This slow path looks like:
>
> IOLOCK_EXCL
> IOTYPE_BUFFERED
> <excludes all concurrent DIO>
> set IOCB_DONTCACHE
> iomap_file_buffered_write()
>
> Truncate and other IO exclusion code such as fallocate() need to do
> this:
>
> IOLOCK_EXCL
> <wait for IO state to become unlocked>
>
> The IOLOCK_EXCL creates a submission barrier, and the "wait for IO
> state to become unlocked" ensures that all buffered and direct IO
> have been drained and there is no IO in flight at all.
>
> The upside of this is that we get rid of the dependency on
> inode->i_dio_count and we ensure that we don't potentially need a
> similar counter for buffered writes in future. e.g. buffered
> AIO+RWF_DONTCACHE+RWF_DSYNC could be optimised to use FUA and/or IO
> completion side DSYNC operations like AIO+DIO+RWF_DSYNC currently
> does and that would currently need in-flight IO tracking for truncate
> synchronisation. The two-state lock solution avoids that completely.
Okay, sounds great.
>
> Some work needs to be done to enable sane IO completion unlocking
> (i.e. from dio->end_io). My current notes on this say:
>
> - ->end_io only gets called once when all bios submitted for the dio
> are complete. hence only one completion, so unlock is balanced
> - caller has no idea on error if IO was submitted and completed;
> if dio->end_io unlocks on IO error, the waiting submitter has no
> clue whether it has to unlock or not.
> - need a clean submitter unlock model. Alternatives?
> - dio->end_io only unlocks on IO error when
> dio->wait_for_completion is not set (i.e. completing an AIO,
> submitter was given -EIOCBQUEUED). iomap_dio_rw() caller can
> then do:
>
> if (ret < 0 && ret != -EIOCBQUEUED) {
> /* unlock inode */
> }
> - if end_io is checking ->wait_for_completion, only ever unlock
> if it isn't set? i.e. if there is a waiter, we leave it to them
> to unlock? Simpler rule for ->end_io, cleaner for the submitter
> to handle:
>
> if (ret != -EIOCBQUEUED) {
> /* unlock inode */
> }
> - need to move DIO write page cache invalidation and inode_dio_end()
> into ->end_io for implementations
> - if no ->end_io provided, do what the current code does.
>
> There are also a few changes needed to avoid inode->i_dio_count in
> iomap:
> - need a flag to tell iomap_dio_rw() not to account the DIO
> - inode_dio_end() may need to be moved to ->dio_end, or we could
> use the "do not account" flag to avoid it.
> - However, page cache invalidation and dsync work needs to be done
> before in-flight dio release, so we likely need to move this
> stuff to ->end_io before we drop the IOTYPE lock...
> - probably can be handled with appropriate helpers...
>
> I've implemented some of this already; I'm currently in the process
> of making truncate exclusion work correctly. Once that works, I'll
> post the code....
Thank you for sharing your thoughts, I will be waiting for that.
>
> -Dave.
On Wed, Apr 30, 2025 at 05:03:51PM +0800, Chi Zhiling wrote:
> On 2025/4/30 10:05, Dave Chinner wrote:
> > On Fri, Apr 25, 2025 at 06:38:39PM +0800, Chi Zhiling wrote:
> > > From: Chi Zhiling <chizhiling@kylinos.cn>
> > >
> > > This is a patch attempting to implement concurrent buffered writes.
> > > The main idea is to use the folio lock to ensure the atomicity of the
> > > write when writing to a single folio, instead of using the i_rwsem.
> > >
> > > I tried the "folio batch" solution, which is a great idea, but during
> > > testing, I encountered an OOM issue because the locked folios couldn't
> > > be reclaimed.
> > >
> > > So for now, I can only allow concurrent writes within a single block.
> > > The good news is that since we already support BS > PS, we can use a
> > > larger block size to enable higher granularity concurrency.
> >
> > I'm not going to say no to this, but I think it's a short term and
> > niche solution to the general problem of enabling shared buffered
> > writes. i.e. I expect that it will not exist for long, whilst
>
> Hi, Dave,
>
> Yes, it's a short-term solution, but it's enough for some scenarios.
> I would also like to see a better idea.
>
> > experience tells me that adding special cases to the IO path locking
> > has a fairly high risk of unexpected regressions and/or data
> > corruption....
>
> I can't say there is definitely no data corruption, but I haven't seen
> any new errors in xfstests.

Yeah, that's why they are "unexpected regressions" - testing looks
fine, but once it gets out into complex production workloads....

> We might need to add some assertions in the code to check for the risk
> of data corruption, not specifically for this patch, but for the current
> XFS system in general. This would help developers avoid introducing new
> bugs, similar to the lockdep tool.

I'm not sure what you envisage here or what problems you think we
might be able to catch - can you describe what you are thinking
about here?

> > > These ideas come from previous discussions:
> > > https://lore.kernel.org/all/953b0499-5832-49dc-8580-436cf625db8c@163.com/
> >
> > In my spare time I've been looking at using the two state lock from
> > bcachefs for this because it looks to provide a general solution to
> > the issue of concurrent buffered writes.
>
> In fact, I have tried the two state lock, and it does work quite well.
> However, I noticed some performance degradation in single-threaded
> scenarios in UnixBench (I'm not sure if it's caused by the memory
> barrier).

Please share the patch - I'd like to see how you implemented it and
how you solved the various lock ordering and wider IO serialisation
issues. It may be that I've overlooked something and your
implementation makes me aware of it. OTOH, I might see something in
your patch that could be improved that mitigates the regression.

> Since single-threaded bufferedio is still the primary read-write mode,
> I don't want to introduce too much impact in single-threaded scenarios.

I mostly don't care that much about small single threaded
performance regressions anywhere in XFS if there is some upside for
scalability or performance. We've always traded off single threaded
performance for better concurrency and/or scalability in XFS (right
from the initial design choices way back in the early 1990s), so I
don't see why we'd treat a significant improvement in buffered IO
concurrency any differently.

-Dave.
--
Dave Chinner
david@fromorbit.com
On 2025/5/1 07:45, Dave Chinner wrote:
> On Wed, Apr 30, 2025 at 05:03:51PM +0800, Chi Zhiling wrote:
>> On 2025/4/30 10:05, Dave Chinner wrote:
>>> On Fri, Apr 25, 2025 at 06:38:39PM +0800, Chi Zhiling wrote:
>>>> From: Chi Zhiling <chizhiling@kylinos.cn>
>>>>
>>>> This is a patch attempting to implement concurrent buffered writes.
>>>> The main idea is to use the folio lock to ensure the atomicity of the
>>>> write when writing to a single folio, instead of using the i_rwsem.
>>>>
>>>> I tried the "folio batch" solution, which is a great idea, but during
>>>> testing, I encountered an OOM issue because the locked folios couldn't
>>>> be reclaimed.
>>>>
>>>> So for now, I can only allow concurrent writes within a single block.
>>>> The good news is that since we already support BS > PS, we can use a
>>>> larger block size to enable higher granularity concurrency.
>>>
>>> I'm not going to say no to this, but I think it's a short term and
>>> niche solution to the general problem of enabling shared buffered
>>> writes. i.e. I expect that it will not exist for long, whilst
>>
>> Hi, Dave,
>>
>> Yes, it's a short-term solution, but it's enough for some scenarios.
>> I would also like to see a better idea.
>>
>>> experience tells me that adding special cases to the IO path locking
>>> has a fairly high risk of unexpected regressions and/or data
>>> corruption....
>>
>> I can't say there is definitely no data corruption, but I haven't seen
>> any new errors in xfstests.
>
> Yeah, that's why they are "unexpected regressions" - testing looks
> fine, but once it gets out into complex production workloads....
>
>> We might need to add some assertions in the code to check for the risk
>> of data corruption, not specifically for this patch, but for the current
>> XFS system in general. This would help developers avoid introducing new
>> bugs, similar to the lockdep tool.
>
> I'm not sure what you envisage here or what problems you think we
> might be able to catch - can you describe what you are thinking
> about here?
I was just saying it casually.
I mean, is there a way to check for data corruption risks, rather than
waiting for it to happen and then reporting an error? Just like how
lockdep detects deadlock risks in advance.
I guess not.
>
>>>> These ideas come from previous discussions:
>>>> https://lore.kernel.org/all/953b0499-5832-49dc-8580-436cf625db8c@163.com/
>>>
>>> In my spare time I've been looking at using the two state lock from
>>> bcachefs for this because it looks to provide a general solution to
>>> the issue of concurrent buffered writes.
>>
>> In fact, I have tried the two state lock, and it does work quite well.
>> However, I noticed some performance degradation in single-threaded
>> scenarios in UnixBench (I'm not sure if it's caused by the memory
>> barrier).
>
> Please share the patch - I'd like to see how you implemented it and
> how you solved the various lock ordering and wider IO serialisation
> issues. It may be that I've overlooked something and your
> implementation makes me aware of it. OTOH, I might see something in
> your patch that could be improved that mitigates the regression.
I think I haven't solved these problems.
The lock order I envisioned is IOLOCK -> TWO_STATE_LOCK -> MMAP_LOCK ->
ILOCK, which means TWO_STATE_LOCK must be released before the IOLOCK is
released, including when upgrading IOLOCK_SHARED to IOLOCK_EXCL.
However, I didn't do this.
I missed this part, and although I didn't encounter any issues in the
xfstests, this could indeed lead to a deadlock.
Besides this, is there anything else I have missed?
The patch is as follows, though it's probably not very helpful as-is:
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 3e7448c2a969..573e31bfef3f 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -36,6 +36,17 @@
static const struct vm_operations_struct xfs_file_vm_ops;
+#define TWO_STATE_LOCK(ip, state) \
+ xfs_two_state_lock(&ip->i_write_lock, state)
+
+#define TWO_STATE_UNLOCK(ip, state) \
+ xfs_two_state_unlock(&ip->i_write_lock, state)
+
+#define buffered_lock(inode) TWO_STATE_LOCK(inode, 0)
+#define buffered_unlock(inode) TWO_STATE_UNLOCK(inode, 0)
+#define direct_lock(inode) TWO_STATE_LOCK(inode, 1)
+#define direct_unlock(inode) TWO_STATE_UNLOCK(inode, 1)
+
/*
* Decide if the given file range is aligned to the size of the fundamental
* allocation unit for the file.
@@ -263,7 +274,10 @@ xfs_file_dio_read(
ret = xfs_ilock_iocb(iocb, XFS_IOLOCK_SHARED);
if (ret)
return ret;
+ direct_lock(ip);
ret = iomap_dio_rw(iocb, to, &xfs_read_iomap_ops, NULL, 0, NULL, 0);
+ direct_unlock(ip);
+
xfs_iunlock(ip, XFS_IOLOCK_SHARED);
return ret;
@@ -598,9 +612,13 @@ xfs_file_dio_write_aligned(
xfs_ilock_demote(ip, XFS_IOLOCK_EXCL);
iolock = XFS_IOLOCK_SHARED;
}
+
+ direct_lock(ip);
trace_xfs_file_direct_write(iocb, from);
ret = iomap_dio_rw(iocb, from, &xfs_direct_write_iomap_ops,
&xfs_dio_write_ops, 0, NULL, 0);
+ direct_unlock(ip);
+
out_unlock:
if (iolock)
xfs_iunlock(ip, iolock);
@@ -676,9 +694,11 @@ xfs_file_dio_write_unaligned(
if (flags & IOMAP_DIO_FORCE_WAIT)
inode_dio_wait(VFS_I(ip));
+ direct_lock(ip);
trace_xfs_file_direct_write(iocb, from);
ret = iomap_dio_rw(iocb, from, &xfs_direct_write_iomap_ops,
&xfs_dio_write_ops, flags, NULL, 0);
+ direct_unlock(ip);
/*
* Retry unaligned I/O with exclusive blocking semantics if the DIO
@@ -776,9 +796,11 @@ xfs_file_buffered_write(
if (ret)
goto out;
+ buffered_lock(ip);
trace_xfs_file_buffered_write(iocb, from);
ret = iomap_file_buffered_write(iocb, from,
&xfs_buffered_write_iomap_ops);
+ buffered_unlock(ip);
/*
* If we hit a space limit, try to free up some lingering preallocated
diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 52210a54fe7e..a8bc8d9737c4 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -114,6 +114,7 @@ xfs_inode_alloc(
spin_lock_init(&ip->i_ioend_lock);
ip->i_next_unlinked = NULLAGINO;
ip->i_prev_unlinked = 0;
+ two_state_lock_init(&ip->i_write_lock);
return ip;
}
diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h
index b91aaa23ea1e..9a8c75feda16 100644
--- a/fs/xfs/xfs_inode.h
+++ b/fs/xfs/xfs_inode.h
@@ -8,6 +8,7 @@
#include "xfs_inode_buf.h"
#include "xfs_inode_fork.h"
+#include "xfs_lock.h"
/*
* Kernel only inode definitions
@@ -92,6 +93,8 @@ typedef struct xfs_inode {
spinlock_t i_ioend_lock;
struct work_struct i_ioend_work;
struct list_head i_ioend_list;
+
+ two_state_lock_t i_write_lock;
} xfs_inode_t;
static inline bool xfs_inode_on_unlinked_list(const struct xfs_inode *ip)
Thanks
>
>> Since single-threaded bufferedio is still the primary read-write mode,
>> I don't want to introduce too much impact in single-threaded scenarios.
>
> I mostly don't care that much about small single threaded
> performance regressions anywhere in XFS if there is some upside for
> scalability or performance. We've always traded off single threaded
> performance for better concurrency and/or scalability in XFS (right
> from the initial design choices way back in the early 1990s), so I
> don't see why we'd treat a significant improvement in buffered IO
> concurrency any differently.
>
> -Dave.