Thanks for the reviews and comments by Rafael, James, Mark and Andi.
Here's version 2 of the patch, incorporating your comments and some
updates to my previous patch description.
I noticed that before entering the idle state, the menu idle governor
looks up the current pm_qos target value from the list of qos requests
received. This lookup currently requires acquiring a lock to walk the
list of qos requests and find the qos target value, slowing entry into
the idle state due to contention by multiple cpus accessing this list.
The contention is severe when many cpus are waking up and going idle.
For example, for a simple workload with 32 pairs of processes
ping-ponging messages to each other, with all 64 cpu cores active in
the test system, I see the following profile, with 37.82% of cpu
cycles spent contending on pm_qos_lock:
-  37.82%  swapper  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
   - _raw_spin_lock_irqsave
      - 95.65% pm_qos_request
           menu_select
           cpuidle_idle_call
         - cpu_idle
              99.98% start_secondary
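
For reference, the hot path behind this profile looks roughly like the
following (a sketch reconstructed from the call stack and the
kernel/pm_qos_params.c of that era; exact details are assumed):

/*
 * Sketch of the contended read path: every pm_qos_request() call from
 * the idle path takes the global pm_qos_lock just to read the already
 * computed target value, so concurrent idle entries serialize here.
 */
int pm_qos_request(int pm_qos_class)
{
	unsigned long flags;
	int value;

	spin_lock_irqsave(&pm_qos_lock, flags);
	value = pm_qos_get_value(pm_qos_array[pm_qos_class]);
	spin_unlock_irqrestore(&pm_qos_lock, flags);

	return value;
}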
A better approach is to cache the updated pm_qos target value so that
reading it does not require lock acquisition, as in the patch below.
With this patch the contention on pm_qos_lock is removed, and I saw a
2.2x increase in throughput for my message-passing workload.
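
The shape of the change is roughly the following (a sketch under the
assumption that pm_qos_object gains an atomic_t target_value field;
field and helper names are illustrative):

/*
 * Updaters recompute the target under pm_qos_lock and publish it with
 * an atomic store; readers then need only a plain atomic read.
 */
static inline s32 pm_qos_read_value(struct pm_qos_object *o)
{
	return atomic_read(&o->target_value);	/* lockless read */
}

static inline void pm_qos_set_value(struct pm_qos_object *o, s32 value)
{
	atomic_set(&o->target_value, value);	/* called under pm_qos_lock */
}

/* The idle-path hot spot then reduces to: */
int pm_qos_request(int pm_qos_class)
{
	return pm_qos_read_value(pm_qos_array[pm_qos_class]);
}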
cc: stable@kernel.org
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Acked-by: Andi Kleen <ak@linux.intel.com>
Acked-by: James Bottomley <James.Bottomley@suse.de>
Acked-by: mark gross <markgross@thegnar.org>
Signed-off-by: Len Brown <len.brown@intel.com>