Commit 0e0ab73c authored by Sean Christopherson, committed by Paolo Bonzini

KVM: VMX: Zero out *all* general purpose registers after VM-Exit

...except RSP, which is restored by hardware as part of VM-Exit.

Paolo theorized that restoring registers from the stack after a VM-Exit
in lieu of zeroing them could lead to speculative execution with the
guest's values, e.g. if the stack accesses miss the L1 cache[1].
Zeroing XORs are dirt cheap, so just be ultra-paranoid.

Note that the scratch register (currently RCX) used to save/restore the
guest state is also zeroed as its host-defined value is loaded via the
stack, just with a MOV instead of a POP.

[1] https://patchwork.kernel.org/patch/10771539/#22441255
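
As an aside on the diff below: a 32-bit XOR is sufficient to clear a full
64-bit register, because x86-64 zero-extends every write to a 32-bit
subregister into the upper 32 bits.  That is why the patch can use the
shorter "xor %%r8d, %%r8d" encoding rather than "xor %%r8, %%r8".  A
minimal userspace sketch demonstrating the zero-extension (illustrative
only, not part of the commit; assumes gcc on x86-64):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	uint64_t r8_after;

	asm volatile("movq $-1, %%r8 \n\t"    /* poison all 64 bits of R8 */
		     "xor  %%r8d, %%r8d \n\t" /* 32-bit XOR, as in the patch */
		     "movq %%r8, %0"
		     : "=r"(r8_after)
		     :
		     : "r8");

	/* Prints 0x0000000000000000: the upper 32 bits were cleared too. */
	printf("r8 = 0x%016" PRIx64 "\n", r8_after);
	return 0;
}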

Fixes: 0cb5b306 ("kvm: vmx: Scrub hardware GPRs at VM-exit")
Cc: <stable@vger.kernel.org>
Cc: Jim Mattson <jmattson@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 1ce072cb
+11 −3
@@ -6452,9 +6452,14 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
 		"mov %%r13, %c[r13](%%" _ASM_CX ") \n\t"
 		"mov %%r14, %c[r14](%%" _ASM_CX ") \n\t"
 		"mov %%r15, %c[r15](%%" _ASM_CX ") \n\t"
+
 		/*
-		* Clear host registers marked as clobbered to prevent
-		* speculative use.
-		*/
+		 * Clear all general purpose registers (except RSP, which is loaded by
+		 * the CPU during VM-Exit) to prevent speculative use of the guest's
+		 * values, even those that are saved/loaded via the stack.  In theory,
+		 * an L1 cache miss when restoring registers could lead to speculative
+		 * execution with the guest's values.  Zeroing XORs are dirt cheap,
+		 * i.e. the extra paranoia is essentially free.
+		 */
 		"xor %%r8d,  %%r8d \n\t"
 		"xor %%r9d,  %%r9d \n\t"
@@ -6470,8 +6475,11 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
 
 		"xor %%eax, %%eax \n\t"
 		"xor %%ebx, %%ebx \n\t"
+		"xor %%ecx, %%ecx \n\t"
+		"xor %%edx, %%edx \n\t"
 		"xor %%esi, %%esi \n\t"
 		"xor %%edi, %%edi \n\t"
+		"xor %%ebp, %%ebp \n\t"
 		"pop  %%" _ASM_BP "; pop  %%" _ASM_DX " \n\t"
 	      : ASM_CALL_CONSTRAINT
 	      : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp),
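
To make the save-then-clear pattern in the diff concrete, here is a
self-contained userspace toy (illustrative only, not kernel code; the
struct and names are made up).  A scratch register addresses the save
area via a "%c[...]" compile-time offset, the "guest" value is stored
through it, and then every register that held guest state is zeroed
with a cheap 32-bit XOR, mirroring the diff:

#include <stddef.h>
#include <stdio.h>

struct regs {
	unsigned long rbx;	/* stand-in for a vcpu register save slot */
};

int main(void)
{
	struct regs save = { 0 };
	void *scratch = &save;	/* plays the role of the RCX scratch register */

	asm volatile(
		/* Pretend RBX holds a guest value at "VM-Exit". */
		"mov  $0xdead, %%rbx \n\t"
		/* Save it through the scratch register; %c[...] prints a constant offset. */
		"mov  %%rbx, %c[rbx](%0) \n\t"
		/* Zero everything that held guest state; XORs are dirt cheap. */
		"xor  %%ebx, %%ebx \n\t"
		"xor  %k0, %k0 \n\t"
		: "+c"(scratch), "+m"(save)
		: [rbx] "i" (offsetof(struct regs, rbx))
		: "rbx", "cc");

	printf("saved guest rbx = 0x%lx\n", save.rbx);	/* prints 0xdead */
	return 0;
}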