Commit bdcd3eab authored by Xiaoguang Wang, committed by Jens Axboe

io_uring: fix poll_list race for SETUP_IOPOLL|SETUP_SQPOLL

After making ext4 support the iopoll method (i.e. setting
ext4_file_operations' iopoll method to iomap_dio_iopoll()), we found
that fio can easily hang in fio_ioring_getevents() with the below fio
job:
    rm -f testfile; sync;
    sudo fio -name=fiotest -filename=testfile -iodepth=128 -thread
    -rw=write -ioengine=io_uring -hipri=1 -sqthread_poll=1 -direct=1
    -bs=4k -size=10G -numjobs=8 -runtime=2000 -group_reporting
with IORING_SETUP_SQPOLL and IORING_SETUP_IOPOLL enabled.

Two issues combine to produce this hang. The first is that when
IORING_SETUP_SQPOLL and IORING_SETUP_IOPOLL are both enabled, fio does
not call io_uring_enter() to reap completed events; it relies entirely
on the kernel's io_sq_thread to poll for completions.
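
To make this concrete, here is a minimal userspace sketch of that
pattern (illustrative only, not part of this commit; it assumes
liburing and mirrors the fio job above):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdlib.h>
    #include <liburing.h>

    int main(void)
    {
            struct io_uring_params p = { 0 };
            struct io_uring ring;
            struct io_uring_cqe *cqe;
            struct iovec iov;
            int fd;

            /* the kernel sq thread submits and (for IOPOLL) reaps for
             * us; SQPOLL needs privileges on these kernels */
            p.flags = IORING_SETUP_SQPOLL | IORING_SETUP_IOPOLL;
            if (io_uring_queue_init_params(128, &ring, &p) < 0)
                    exit(1);

            /* IOPOLL needs O_DIRECT on a file whose fops implement ->iopoll() */
            fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
            if (fd < 0 || posix_memalign(&iov.iov_base, 4096, 4096))
                    exit(1);
            iov.iov_len = 4096;

            io_uring_prep_writev(io_uring_get_sqe(&ring), fd, &iov, 1, 0);
            /* with SQPOLL this only bumps the SQ tail (plus a wakeup
             * if the sq thread went idle); no submission syscall */
            io_uring_submit(&ring);

            /* like fio_ioring_getevents(): spin on the CQ ring only,
             * never entering the kernel to reap events */
            while (io_uring_peek_cqe(&ring, &cqe) == -EAGAIN)
                    ;
            io_uring_cqe_seen(&ring, cqe);
            return 0;
    }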

The second is a race: when io_submit_sqes() in io_sq_thread() submits
a batch of sqes, the variable 'inflight' records the number of
submitted reqs, and io_sq_thread then polls for the reqs that have
been added to poll_list. But if some of the previous reqs were punted
to an io worker, they will not show up in poll_list right away.
io_sq_thread() therefore polls only part of the previously submitted
reqs, finds poll_list empty, and resets 'inflight' to zero. If the app
then simply waits for these deferred reqs and never wakes up
io_sq_thread again, the hang occurs.
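
Schematically (an illustrative interleaving, not a captured trace):

    io_sq_thread                          io worker
    ------------                          ---------
    io_submit_sqes() -> inflight = N
    (some reqs punted to io worker)
    poll reqs currently in poll_list
    poll_list now empty
    inflight = 0, go to sleep
                                          finish punted reqs, add
                                          them to poll_list
    (nobody polls poll_list; the app
     never calls io_uring_enter -> hang)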

For an app that relies entirely on io_sq_thread to poll for completed
requests, make io_iopoll_req_issued() wake up io_sq_thread when it
adds a new element to poll_list, and have io_sq_thread check whether
poll_list is empty again just before it goes to sleep; if it is not
empty, continue polling.
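
This follows the usual lost-wakeup avoidance pattern (sketched below
for illustration; see the diff for the actual change):

    /* producer side: publish the req, then wake any sleeper */
    list_add_tail(&req->list, &ctx->poll_list);
    if ((ctx->flags & IORING_SETUP_SQPOLL) &&
        wq_has_sleeper(&ctx->sqo_wait))
            wake_up(&ctx->sqo_wait);

    /* io_sq_thread side: prepare_to_wait() sets the task state
     * before the re-check, so either we see the new poll_list
     * entry here, or the producer sees us sleeping and wakes us */
    prepare_to_wait(&ctx->sqo_wait, &wait, TASK_INTERRUPTIBLE);
    if ((ctx->flags & IORING_SETUP_IOPOLL) &&
        !list_empty_careful(&ctx->poll_list)) {
            finish_wait(&ctx->sqo_wait, &wait);
            continue;       /* poll again instead of sleeping */
    }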

Signed-off-by: Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 41726c9a
fs/io_uring.c: +27 −32

--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1821,6 +1821,10 @@ static void io_iopoll_req_issued(struct io_kiocb *req)
 		list_add(&req->list, &ctx->poll_list);
 	else
 		list_add_tail(&req->list, &ctx->poll_list);
+
+	if ((ctx->flags & IORING_SETUP_SQPOLL) &&
+	    wq_has_sleeper(&ctx->sqo_wait))
+		wake_up(&ctx->sqo_wait);
 }
 
 static void io_file_put(struct io_submit_state *state)
@@ -5086,9 +5090,8 @@ static int io_sq_thread(void *data)
 	const struct cred *old_cred;
 	mm_segment_t old_fs;
 	DEFINE_WAIT(wait);
-	unsigned inflight;
 	unsigned long timeout;
-	int ret;
+	int ret = 0;
 
 	complete(&ctx->completions[1]);
 
@@ -5096,39 +5099,19 @@ static int io_sq_thread(void *data)
 	set_fs(USER_DS);
 	old_cred = override_creds(ctx->creds);
 
-	ret = timeout = inflight = 0;
+	timeout = jiffies + ctx->sq_thread_idle;
 	while (!kthread_should_park()) {
 		unsigned int to_submit;
 
-		if (inflight) {
+		if (!list_empty(&ctx->poll_list)) {
 			unsigned nr_events = 0;
 
-			if (ctx->flags & IORING_SETUP_IOPOLL) {
-				/*
-				 * inflight is the count of the maximum possible
-				 * entries we submitted, but it can be smaller
-				 * if we dropped some of them. If we don't have
-				 * poll entries available, then we know that we
-				 * have nothing left to poll for. Reset the
-				 * inflight count to zero in that case.
-				 */
-				mutex_lock(&ctx->uring_lock);
-				if (!list_empty(&ctx->poll_list))
-					io_iopoll_getevents(ctx, &nr_events, 0);
-				else
-					inflight = 0;
-				mutex_unlock(&ctx->uring_lock);
-			} else {
-				/*
-				 * Normal IO, just pretend everything completed.
-				 * We don't have to poll completions for that.
-				 */
-				nr_events = inflight;
-			}
-
-			inflight -= nr_events;
-			if (!inflight)
-				timeout = jiffies + ctx->sq_thread_idle;
+			mutex_lock(&ctx->uring_lock);
+			if (!list_empty(&ctx->poll_list))
+				io_iopoll_getevents(ctx, &nr_events, 0);
+			mutex_unlock(&ctx->uring_lock);
 		}
 
 		to_submit = io_sqring_entries(ctx);
@@ -5157,7 +5140,7 @@ static int io_sq_thread(void *data)
 			 * more IO, we should wait for the application to
 			 * reap events and wake us up.
 			 */
-			if (inflight ||
+			if (!list_empty(&ctx->poll_list) ||
 			    (!time_after(jiffies, timeout) && ret != -EBUSY &&
 			    !percpu_ref_is_dying(&ctx->refs))) {
 				cond_resched();
@@ -5167,6 +5150,19 @@ static int io_sq_thread(void *data)
 			prepare_to_wait(&ctx->sqo_wait, &wait,
 						TASK_INTERRUPTIBLE);
 
+			/*
+			 * While doing polled IO, before going to sleep, we need
+			 * to check if there are new reqs added to poll_list, it
+			 * is because reqs may have been punted to io worker and
+			 * will be added to poll_list later, hence check the
+			 * poll_list again.
+			 */
+			if ((ctx->flags & IORING_SETUP_IOPOLL) &&
+			    !list_empty_careful(&ctx->poll_list)) {
+				finish_wait(&ctx->sqo_wait, &wait);
+				continue;
+			}
+
 			/* Tell userspace we may need a wakeup call */
 			ctx->rings->sq_flags |= IORING_SQ_NEED_WAKEUP;
 			/* make sure to read SQ tail after writing flags */
@@ -5194,8 +5190,7 @@ static int io_sq_thread(void *data)
 		mutex_lock(&ctx->uring_lock);
 		ret = io_submit_sqes(ctx, to_submit, NULL, -1, &cur_mm, true);
 		mutex_unlock(&ctx->uring_lock);
-		if (ret > 0)
-			inflight += ret;
+		timeout = jiffies + ctx->sq_thread_idle;
 	}
 
 	set_fs(old_fs);