On Tue, Dec 16, 2025 at 4:10 AM Caleb Sander Mateos
<csander@purestorage.com> wrote:
>
> Use the io_ring_submit_lock() helper in io_iopoll_req_issued() instead
> of open-coding the locking logic. io_ring_submit_unlock() can't be used
> for the unlock, though, because the SQPOLL wakeup check has to run
> before the mutex is released.
>
> Signed-off-by: Caleb Sander Mateos <csander@purestorage.com>

This looks good to me.

Reviewed-by: Joanne Koong <joannelkoong@gmail.com>
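
For reference, and as a rough paraphrased sketch rather than a verbatim
copy of what's in io_uring/io_uring.h, the two helpers named above boil
down to something like:

    static inline void io_ring_submit_lock(struct io_ring_ctx *ctx,
                                           unsigned issue_flags)
    {
            /* take uring_lock only if the caller doesn't already hold it */
            if (issue_flags & IO_URING_F_UNLOCKED)
                    mutex_lock(&ctx->uring_lock);
            lockdep_assert_held(&ctx->uring_lock);
    }

    static inline void io_ring_submit_unlock(struct io_ring_ctx *ctx,
                                             unsigned issue_flags)
    {
            lockdep_assert_held(&ctx->uring_lock);
            if (issue_flags & IO_URING_F_UNLOCKED)
                    mutex_unlock(&ctx->uring_lock);
    }

So the lock side is a drop-in replacement here, while the unlock side
can't be used because the SQPOLL wakeup check below must run while
uring_lock is still held.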
> ---
> io_uring/io_uring.c | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
> index 6d6fe5bdebda..40582121c6a7 100644
> --- a/io_uring/io_uring.c
> +++ b/io_uring/io_uring.c
> @@ -1670,15 +1670,13 @@ void io_req_task_complete(struct io_tw_req tw_req, io_tw_token_t tw)
> * accessing the kiocb cookie.
> */
> static void io_iopoll_req_issued(struct io_kiocb *req, unsigned int issue_flags)
> {
> struct io_ring_ctx *ctx = req->ctx;
> - const bool needs_lock = issue_flags & IO_URING_F_UNLOCKED;
>
> /* workqueue context doesn't hold uring_lock, grab it now */
> - if (unlikely(needs_lock))
> - mutex_lock(&ctx->uring_lock);
> + io_ring_submit_lock(ctx, issue_flags);
>
> /*
> * Track whether we have multiple files in our lists. This will impact
> * how we do polling eventually, not spinning if we're on potentially
> * different devices.
> @@ -1701,11 +1699,11 @@ static void io_iopoll_req_issued(struct io_kiocb *req, unsigned int issue_flags)
> if (READ_ONCE(req->iopoll_completed))
> wq_list_add_head(&req->comp_list, &ctx->iopoll_list);
> else
> wq_list_add_tail(&req->comp_list, &ctx->iopoll_list);
>
> - if (unlikely(needs_lock)) {
> + if (unlikely(issue_flags & IO_URING_F_UNLOCKED)) {
> /*
> * If IORING_SETUP_SQPOLL is enabled, sqes are either handle
> * in sq thread task context or in io worker task context. If
> * current task context is sq thread, we don't need to check
> * whether should wake up sq thread.
> --
> 2.45.2
>