The memory barriers in list_del_init_careful() and list_empty_careful()
pair up to provide the proper ordering between data.got_token and
data.wq.entry, so remove the redundant explicit barriers. Also change a
"break" statement to "return" to avoid a redundant call to
finish_wait().
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
block/blk-rq-qos.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
index dc510f493ba57..9b0aa7dd6779f 100644
--- a/block/blk-rq-qos.c
+++ b/block/blk-rq-qos.c
@@ -218,7 +218,6 @@ static int rq_qos_wake_function(struct wait_queue_entry *curr,
 		return -1;
 
 	data->got_token = true;
-	smp_wmb();
 	wake_up_process(data->task);
 	list_del_init_careful(&curr->entry);
 	return 1;
@@ -274,10 +273,9 @@ void rq_qos_wait(struct rq_wait *rqw, void *private_data,
 			 * which means we now have two. Put our local token
 			 * and wake anyone else potentially waiting for one.
 			 */
-			smp_rmb();
 			if (data.got_token)
 				cleanup_cb(rqw, private_data);
-			break;
+			return;
 		}
 		io_schedule();
 		has_sleeper = true;
--
2.20.1
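
For readers following the barrier argument, here is a minimal sketch of
the waiter/waker pattern the patch relies on. This is illustrative code
only, loosely modeled on rq_qos_wake_function() and rq_qos_wait(); the
sketch_* names and the simplified struct are stand-ins, not kernel
identifiers. The point it demonstrates: list_del_init_careful() ends
with smp_store_release() and list_empty_careful() starts with
smp_load_acquire(), so a waiter that observes the entry as removed also
observes got_token, and the explicit smp_wmb()/smp_rmb() pair adds
nothing.

#include <linux/list.h>
#include <linux/sched.h>
#include <linux/wait.h>

/* Simplified stand-in for struct rq_qos_wait_data in blk-rq-qos.c. */
struct sketch_wait_data {
	struct wait_queue_entry	wq;
	struct task_struct	*task;
	bool			got_token;
};

/* Waker side, mirroring rq_qos_wake_function(). */
static void sketch_wake(struct sketch_wait_data *data)
{
	data->got_token = true;		/* plain store is enough */
	wake_up_process(data->task);	/* implies a full barrier as well */
	/*
	 * list_del_init_careful() finishes with smp_store_release(), so
	 * the got_token store above is ordered before the entry can be
	 * seen as deleted by anyone using list_empty_careful().
	 */
	list_del_init_careful(&data->wq.entry);
}

/* Waiter side, mirroring the check in rq_qos_wait() after waking up. */
static bool sketch_saw_token(struct sketch_wait_data *data)
{
	/*
	 * list_empty_careful() starts with smp_load_acquire(), pairing
	 * with the release above: once the entry is observed as deleted,
	 * the got_token store is visible too; no extra smp_rmb() needed.
	 */
	if (list_empty_careful(&data->wq.entry))
		return data->got_token;
	return false;
}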
On Mon, 21 Oct 2024 16:52:51 +0800, Muchun Song wrote:
> The memory barriers in list_del_init_careful() and list_empty_careful()
> pair up to provide the proper ordering between data.got_token and
> data.wq.entry, so remove the redundant explicit barriers. Also change a
> "break" statement to "return" to avoid a redundant call to
> finish_wait().
>
> [...]

Applied, thanks!

[1/1] block: remove redundant explicit memory barrier from rq_qos waiter and waker
      commit: 904ebd2527c507752f5ddb358f887d2e0dab96a0

Best regards,
-- 
Jens Axboe
On 2024/10/21 16:52, Muchun Song wrote:
> The memory barriers in list_del_init_careful() and list_empty_careful()
> pair up to provide the proper ordering between data.got_token and
> data.wq.entry, so remove the redundant explicit barriers. Also change a
> "break" statement to "return" to avoid a redundant call to
> finish_wait().
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Good catch! Just a small nit below, feel free to add:

Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>

> ---
>  block/blk-rq-qos.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
> index dc510f493ba57..9b0aa7dd6779f 100644
> --- a/block/blk-rq-qos.c
> +++ b/block/blk-rq-qos.c
> @@ -218,7 +218,6 @@ static int rq_qos_wake_function(struct wait_queue_entry *curr,
>  		return -1;
>
>  	data->got_token = true;
> -	smp_wmb();
>  	wake_up_process(data->task);
>  	list_del_init_careful(&curr->entry);
>  	return 1;
> @@ -274,10 +273,9 @@ void rq_qos_wait(struct rq_wait *rqw, void *private_data,
>  		 * which means we now have two. Put our local token
>  		 * and wake anyone else potentially waiting for one.
>  		 */
> -		smp_rmb();
>  		if (data.got_token)
>  			cleanup_cb(rqw, private_data);
> -		break;
> +		return;
>  	}

Would it be better to move this acquire_inflight_cb() above out of
the do-while(1), since we rely on the waker to get the inflight
counter for us?

Thanks.

>  		io_schedule();
>  		has_sleeper = true;
> On Oct 22, 2024, at 15:53, Chengming Zhou <chengming.zhou@linux.dev> wrote:
>
> On 2024/10/21 16:52, Muchun Song wrote:
>> The memory barriers in list_del_init_careful() and list_empty_careful()
>> pair up to provide the proper ordering between data.got_token and
>> data.wq.entry, so remove the redundant explicit barriers. Also change a
>> "break" statement to "return" to avoid a redundant call to
>> finish_wait().
>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>
> Good catch! Just a small nit below, feel free to add:
>
> Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
>
>> ---
>> block/blk-rq-qos.c | 4 +---
>> 1 file changed, 1 insertion(+), 3 deletions(-)
>> diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
>> index dc510f493ba57..9b0aa7dd6779f 100644
>> --- a/block/blk-rq-qos.c
>> +++ b/block/blk-rq-qos.c
>> @@ -218,7 +218,6 @@ static int rq_qos_wake_function(struct wait_queue_entry *curr,
>>  		return -1;
>>  	data->got_token = true;
>> -	smp_wmb();
>>  	wake_up_process(data->task);
>>  	list_del_init_careful(&curr->entry);
>>  	return 1;
>> @@ -274,10 +273,9 @@ void rq_qos_wait(struct rq_wait *rqw, void *private_data,
>>  		 * which means we now have two. Put our local token
>>  		 * and wake anyone else potentially waiting for one.
>>  		 */
>> -		smp_rmb();
>>  		if (data.got_token)
>>  			cleanup_cb(rqw, private_data);
>> -		break;
>> +		return;
>>  	}
>
> Would it be better to move this acquire_inflight_cb() above out of
> the do-while(1), since we rely on the waker to get the inflight
> counter for us?

I also noticed this and am working on it. I will send a separate patch
for that refactoring later. Thanks.

>
> Thanks.
>
>>  		io_schedule();
>>  		has_sleeper = true;
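
To make the suggestion above concrete, here is a rough sketch of what
hoisting the acquire_inflight_cb() attempt out of the do-while(1) loop
in rq_qos_wait() could look like. This is only an illustration of the
idea under discussion (assuming has_sleeper can only be false on the
first loop iteration, since it is set to true after io_schedule()); it
is not the follow-up patch itself, and the declarations from the rest
of the function are omitted.

	/* Sketch of rq_qos_wait()'s tail with the inflight attempt hoisted. */
	has_sleeper = !prepare_to_wait_exclusive(&rqw->wait, &data.wq,
						 TASK_UNINTERRUPTIBLE);
	if (!has_sleeper && acquire_inflight_cb(rqw, private_data)) {
		finish_wait(&rqw->wait, &data.wq);
		/*
		 * We may also have raced with rq_qos_wake_function()
		 * handing us a token; put the extra one back.
		 */
		if (data.got_token)
			cleanup_cb(rqw, private_data);
		return;
	}

	/*
	 * From here on we rely on the waker to acquire the inflight
	 * counter for us, so has_sleeper is no longer needed in the loop.
	 */
	do {
		/* The memory barrier in set_current_state() saves us here. */
		if (data.got_token)
			break;
		io_schedule();
		set_current_state(TASK_UNINTERRUPTIBLE);
	} while (1);
	finish_wait(&rqw->wait, &data.wq);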
On 10/21/24 2:52 AM, Muchun Song wrote:
> The memory barriers in list_del_init_careful() and list_empty_careful()
> pair up to provide the proper ordering between data.got_token and
> data.wq.entry, so remove the redundant explicit barriers. Also change a
> "break" statement to "return" to avoid a redundant call to
> finish_wait().

Not sure why you didn't CC Omar on this one, as he literally just last
week fixed an issue related to this.

>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  block/blk-rq-qos.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
> index dc510f493ba57..9b0aa7dd6779f 100644
> --- a/block/blk-rq-qos.c
> +++ b/block/blk-rq-qos.c
> @@ -218,7 +218,6 @@ static int rq_qos_wake_function(struct wait_queue_entry *curr,
>  		return -1;
>
>  	data->got_token = true;
> -	smp_wmb();
>  	wake_up_process(data->task);
>  	list_del_init_careful(&curr->entry);
>  	return 1;
> @@ -274,10 +273,9 @@ void rq_qos_wait(struct rq_wait *rqw, void *private_data,
>  		 * which means we now have two. Put our local token
>  		 * and wake anyone else potentially waiting for one.
>  		 */
> -		smp_rmb();
>  		if (data.got_token)
>  			cleanup_cb(rqw, private_data);
> -		break;
> +		return;
>  	}
>  	io_schedule();
>  	has_sleeper = true;

-- 
Jens Axboe
> On Oct 21, 2024, at 21:45, Jens Axboe <axboe@kernel.dk> wrote:
>
> On 10/21/24 2:52 AM, Muchun Song wrote:
>> The memory barriers in list_del_init_careful() and list_empty_careful()
>> pair up to provide the proper ordering between data.got_token and
>> data.wq.entry, so remove the redundant explicit barriers. Also change a
>> "break" statement to "return" to avoid a redundant call to
>> finish_wait().
>
> Not sure why you didn't CC Omar on this one, as he literally just last
> week fixed an issue related to this.

Hi Jens,

Yes. I only CC'd the author of the patch that added the barriers, since
I thought they would be the most confident about this. Thanks for the
reminder. I saw Omar's great fix, and thank you for CCing Omar for me;
I think he is also well placed to comment on this patch.

Muchun,
Thanks.
On Tue, Oct 22, 2024 at 02:31:53PM +0800, Muchun Song wrote:
>
>
> > On Oct 21, 2024, at 21:45, Jens Axboe <axboe@kernel.dk> wrote:
> >
> > On 10/21/24 2:52 AM, Muchun Song wrote:
> >> The memory barriers in list_del_init_careful() and list_empty_careful()
> >> pair up to provide the proper ordering between data.got_token and
> >> data.wq.entry, so remove the redundant explicit barriers. Also change a
> >> "break" statement to "return" to avoid a redundant call to
> >> finish_wait().
> >
> > Not sure why you didn't CC Omar on this one, as he literally just last
> > week fixed an issue related to this.
>
> Hi Jens,
>
> Yes. I only CC'd the author of the patch that added the barriers, since
> I thought they would be the most confident about this. Thanks for the
> reminder. I saw Omar's great fix, and thank you for CCing Omar for me;
> I think he is also well placed to comment on this patch.
>
> Muchun,
> Thanks.

Well there goes my streak of not reading memory-barriers.txt for a few
months... This looks fine to me. wake_up_process() also implies a full
memory barrier, so that smp_wmb() was extra redundant.

Reviewed-by: Omar Sandoval <osandov@fb.com>