Commit 2cc3c4b3 authored by Linus Torvalds

Merge tag 'io_uring-5.9-2020-08-15' of git://git.kernel.dk/linux-block

Pull io_uring fixes from Jens Axboe:
 "A few different things in here.

  Seems like syzbot got some more io_uring bits wired up, and we got a
  handful of reports and the associated fixes are in here.

  General fixes too, and a lot of them marked for stable.

  Lastly, a bit of fallout from the async buffered reads, where we now
  more easily trigger short reads. Some applications don't really like
  that, so the io_read() code now handles short reads internally, and
  got a cleanup along the way so that it's now easier to read (and
  documented). We're now passing tests that failed before"

* tag 'io_uring-5.9-2020-08-15' of git://git.kernel.dk/linux-block:
  io_uring: short circuit -EAGAIN for blocking read attempt
  io_uring: sanitize double poll handling
  io_uring: internally retry short reads
  io_uring: retain iov_iter state over io_read/io_write calls
  task_work: only grab task signal lock when needed
  io_uring: enable lookup of links holding inflight files
  io_uring: fail poll arm on queue proc failure
  io_uring: hold 'ctx' reference around task_work queue + execute
  fs: RWF_NOWAIT should imply IOCB_NOIO
  io_uring: defer file table grabbing request cleanup for locked requests
  io_uring: add missing REQ_F_COMP_LOCKED for nested requests
  io_uring: fix recursive completion locking on overflow flush
  io_uring: use TWA_SIGNAL for task_work unconditionally
  io_uring: account locked memory before potential error case
  io_uring: set ctx sq/cq entry count earlier
  io_uring: Fix NULL pointer dereference in loop_rw_iter()
  io_uring: add comments on how the async buffered read retry works
  io_uring: io_async_buf_func() need not test page bit
parents 6f6aea7e f91daf56
fs/io_uring.c: +386 −153 (diff collapsed: preview size limit exceeded)

fs/read_write.c: +1 −1
@@ -3322,7 +3322,7 @@ static inline int kiocb_set_rw_flags(struct kiocb *ki, rwf_t flags)
 	if (flags & RWF_NOWAIT) {
 		if (!(ki->ki_filp->f_mode & FMODE_NOWAIT))
 			return -EOPNOTSUPP;
-		kiocb_flags |= IOCB_NOWAIT;
+		kiocb_flags |= IOCB_NOWAIT | IOCB_NOIO;
 	}
 	if (flags & RWF_HIPRI)
 		kiocb_flags |= IOCB_HIPRI;
kernel/signal.c: +15 −1
@@ -2541,7 +2541,21 @@ bool get_signal(struct ksignal *ksig)
 
 relock:
 	spin_lock_irq(&sighand->siglock);
-	current->jobctl &= ~JOBCTL_TASK_WORK;
+	/*
+	 * Make sure we can safely read ->jobctl() in task_work add. As Oleg
+	 * states:
+	 *
+	 * It pairs with mb (implied by cmpxchg) before READ_ONCE. So we
+	 * roughly have
+	 *
+	 *	task_work_add:				get_signal:
+	 *	STORE(task->task_works, new_work);	STORE(task->jobctl);
+	 *	mb();					mb();
+	 *	LOAD(task->jobctl);			LOAD(task->task_works);
+	 *
+	 * and we can rely on STORE-MB-LOAD [ in task_work_add].
+	 */
+	smp_store_mb(current->jobctl, current->jobctl & ~JOBCTL_TASK_WORK);
 	if (unlikely(current->task_works)) {
 		spin_unlock_irq(&sighand->siglock);
 		task_work_run();
kernel/task_work.c: +7 −1
@@ -42,7 +42,13 @@ task_work_add(struct task_struct *task, struct callback_head *work, int notify)
 		set_notify_resume(task);
 		break;
 	case TWA_SIGNAL:
-		if (lock_task_sighand(task, &flags)) {
+		/*
+		 * Only grab the sighand lock if we don't already have some
+		 * task_work pending. This pairs with the smp_store_mb()
+		 * in get_signal(), see comment there.
+		 */
+		if (!(READ_ONCE(task->jobctl) & JOBCTL_TASK_WORK) &&
+		    lock_task_sighand(task, &flags)) {
 			task->jobctl |= JOBCTL_TASK_WORK;
 			signal_wake_up(task, 0);
 			unlock_task_sighand(task, &flags);