Commit 2cdb54c9 authored by Mauro Carvalho Chehab, committed by Paul E. McKenney

docs: RCU: Convert rculist_nulls.txt to ReST



- Add a SPDX header;
- Adjust document title;
- Some whitespace fixes and new line breaks;
- Mark literal blocks as such;
- Add it to RCU/index.rst.

Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
parent 058cc23b
Documentation/RCU/index.rst +1 −0
@@ -17,6 +17,7 @@ RCU concepts
   rcu_dereference
   whatisRCU
   rcu
+  rculist_nulls
   listRCU
   NMI-RCU
   UP
Documentation/RCU/rculist_nulls.txt → Documentation/RCU/rculist_nulls.rst +194 −0
-Using hlist_nulls to protect read-mostly linked lists and
+.. SPDX-License-Identifier: GPL-2.0
+
+=================================================
+Using RCU hlist_nulls to protect list and objects
+=================================================
+
+This section describes how to use hlist_nulls to
+protect read-mostly linked lists and
objects using SLAB_TYPESAFE_BY_RCU allocations.

Please read the basics in Documentation/RCU/listRCU.rst
@@ -12,6 +19,9 @@ use following algos :

1) Lookup algo
--------------

::

  rcu_read_lock()
  begin:
  obj = lockless_lookup(key);
@@ -33,6 +43,8 @@ rcu_read_unlock();
Beware that lockless_lookup(key) cannot use the traditional hlist_for_each_entry_rcu(),
but must instead use a version with an additional memory barrier (smp_rmb())

::

  lockless_lookup(key)
  {
    struct hlist_node *node, *next;
@@ -43,8 +55,9 @@ lockless_lookup(key)
      if (obj->key == key)
        return obj;
    return NULL;
  }

-And note the traditional hlist_for_each_entry_rcu() misses this smp_rmb() :
+And note the traditional hlist_for_each_entry_rcu() misses this smp_rmb()::

  struct hlist_node *node;
  for (pos = rcu_dereference((head)->first);
@@ -54,9 +67,8 @@ And note the traditional hlist_for_each_entry_rcu() misses this smp_rmb() :
   if (obj->key == key)
     return obj;
  return NULL;
}

-Quoting Corey Minyard :
+Quoting Corey Minyard::

  "If the object is moved from one list to another list in-between the
  time the hash is calculated and the next field is accessed, and the
@@ -67,8 +79,8 @@ Quoting Corey Minyard :
  solved by pre-fetching the "next" field (with proper barriers) before
  checking the key."

-2) Insert algo :
-----------------
+2) Insert algo
+--------------

We need to make sure a reader cannot read the new 'obj->obj_next' value
together with the previous value of 'obj->key'. Otherwise, an item could be deleted
@@ -76,6 +88,8 @@ from a chain, and inserted into another chain. If new chain was empty
before the move, the 'next' pointer is NULL, and a lockless reader cannot
detect that it missed the following items in the original chain.

::

  /*
  * Please note that new inserts are done at the head of list,
  * not in the middle or end.
@@ -99,6 +113,8 @@ Nothing special here, we can use a standard RCU hlist deletion.
But thanks to SLAB_TYPESAFE_BY_RCU, beware that a deleted object can be reused
very quickly (before the end of the RCU grace period)

::

  if (put_last_reference_on(obj)) {
    lock_chain(); // typically a spin_lock()
    hlist_del_init_rcu(&obj->obj_node);
@@ -109,6 +125,7 @@ if (put_last_reference_on(obj) {


--------------------------------------------------------------------------

With hlist_nulls we can avoid the extra smp_rmb() in lockless_lookup()
and the extra smp_wmb() in the insert function.

@@ -124,6 +141,9 @@ scan the list again without harm.


1) lookup algo
--------------

::

  head = &table[slot];
  rcu_read_lock();
@@ -150,8 +170,10 @@ begin:
  out:
  rcu_read_unlock();

-2) Insert function :
---------------------
+2) Insert function
+------------------

::

  /*
  * Please note that new inserts are done at the head of list,
include/linux/rculist_nulls.h +1 −1
@@ -162,7 +162,7 @@ static inline void hlist_nulls_add_fake(struct hlist_nulls_node *n)
 * The barrier() is needed to make sure compiler doesn't cache first element [1],
 * as this loop can be restarted [2]
 * [1] Documentation/core-api/atomic_ops.rst around line 114
- * [2] Documentation/RCU/rculist_nulls.txt around line 146
+ * [2] Documentation/RCU/rculist_nulls.rst around line 146
 */
#define hlist_nulls_for_each_entry_rcu(tpos, pos, head, member)			\
	for (({barrier();}),							\
net/core/sock.c +2 −2
@@ -1973,7 +1973,7 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)

		/*
		 * Before updating sk_refcnt, we must commit prior changes to memory
-		 * (Documentation/RCU/rculist_nulls.txt for details)
+		 * (Documentation/RCU/rculist_nulls.rst for details)
		 */
		smp_wmb();
		refcount_set(&newsk->sk_refcnt, 2);
@@ -3035,7 +3035,7 @@ void sock_init_data(struct socket *sock, struct sock *sk)
	sk_rx_queue_clear(sk);
	/*
	 * Before updating sk_refcnt, we must commit prior changes to memory
-	 * (Documentation/RCU/rculist_nulls.txt for details)
+	 * (Documentation/RCU/rculist_nulls.rst for details)
	 */
	smp_wmb();
	refcount_set(&sk->sk_refcnt, 1);