CPUs without single-byte and double-byte loads and stores place some
"interesting" requirements on concurrent code. For example (adapted
from Peter Hurley's test code), suppose we have the following structure:
	struct foo {
		spinlock_t lock1;
		spinlock_t lock2;
		char a; /* Protected by lock1. */
		char b; /* Protected by lock2. */
	};
	struct foo *foop;
Of course, it is common (and good) practice to place data protected
by different locks in separate cache lines. However, if the locks are
rarely acquired (for example, only in rare error cases), and there are
a great many instances of the data structure, then memory footprint can
trump false-sharing concerns, so that it can be better to place them in
the same cache line as above.
But if the CPU does not support single-byte loads and stores, a store
to foop->a will do a non-atomic read-modify-write operation on foop->b,
which will come as a nasty surprise to someone holding foop->lock2. So we
now require CPUs to support single-byte and double-byte loads and stores.
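For illustration only (this fragment is hypothetical and not part of the
patch), consider a little-endian CPU whose smallest memory access is a
32-bit word, with foop->a and foop->b sharing that word. The byte store
"foop->a = v;" must then be compiled into a non-atomic read-modify-write
along these lines, where v, word, and wordp are made-up names:

	uint32_t *wordp = (uint32_t *)((uintptr_t)&foop->a & ~(uintptr_t)3);
	uint32_t word;

	word = *wordp;				/* Reads foop->b as well as foop->a. */
	word = (word & ~(uint32_t)0xff) | v;	/* Splice in the new value of a. */
	*wordp = word;				/* Writes foop->b back, silently
						   overwriting any concurrent update
						   made while holding lock2. */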
Therefore, this commit adjusts the definition of __native_word() to allow
these sizes to be used by smp_load_acquire() and smp_store_release().
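As a sketch of the adjusted definition (see include/linux/compiler.h for
the exact form):

	#define __native_word(t) \
		(sizeof(t) == sizeof(char) || sizeof(t) == sizeof(short) || \
		 sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long))

so that, for example, smp_load_acquire(&foop->a) and
smp_store_release(&foop->a, 1) now pass the compile-time type check
rather than failing the build.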
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>