The change to flush_kernel_vmap_range() wasn't sufficient to avoid the SMP stalls. The problem is some drivers call these routines with interrupts disabled. Interrupts need to be enabled for flush_tlb_all() and flush_cache_all() to work. This version adds checks to ensure interrupts are not disabled before calling routines that need IPI interrupts. When interrupts are disabled, we now drop into slower code.

The attached change fixes the ordering of cache and TLB flushes in several cases. When we flush the cache using the existing PTE/TLB entries, we need to flush the TLB after doing the cache flush. We don't need to do this when we flush the entire instruction and data caches as these flushes don't use the existing TLB entries. The same is true for tmpalias region flushes.

The flush_kernel_vmap_range() and invalidate_kernel_vmap_range() routines have been updated.

Secondly, we added a new purge_kernel_dcache_range_asm() routine to pacache.S and use it in invalidate_kernel_vmap_range(). Nominally, purges are faster than flushes as the cache lines don't have to be written back to memory.

Hopefully, this is sufficient to resolve the remaining problems due to cache speculation. So far, testing indicates that this is the case. I did work up a patch using tmpalias flushes, but there is a performance hit because we need the physical address for each page, and we also need to sequence access to the tmpalias flush code. This increases the probability of stalls.
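For illustration, here is a minimal sketch of the pattern described above, written against the usual kernel helpers (arch_irqs_disabled(), IS_ENABLED(), flush_tlb_kernel_range(), flush_data_cache(), and the parisc_cache_flush_threshold cutoff). It is a sketch of the idea, not the literal patch hunk:

    void invalidate_kernel_vmap_range(void *vaddr, int size)
    {
    	unsigned long start = (unsigned long)vaddr;
    	unsigned long end = start + size;

    	/*
    	 * Fast path: whole-cache flush. flush_data_cache() needs IPIs
    	 * on SMP, so take this path only when interrupts are enabled
    	 * (or on UP) and the range is large enough to be worth it.
    	 * Ordering is irrelevant here because the whole-cache flush
    	 * does not go through the existing TLB entries.
    	 */
    	if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) &&
    	    (unsigned long)size >= parisc_cache_flush_threshold) {
    		flush_tlb_kernel_range(start, end);
    		flush_data_cache();
    		return;
    	}

    	/*
    	 * Slower path: purge through the current PTE/TLB entries, so
    	 * the TLB must be flushed only after the cache operation. The
    	 * purge variant skips the write-back of cache lines to memory.
    	 */
    	purge_kernel_dcache_range_asm(start, end);
    	flush_tlb_kernel_range(start, end);
    }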
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: stable@vger.kernel.org # 4.9+
Signed-off-by: Helge Deller <deller@gmx.de>