author    Stefan Reinauer <reinauer@google.com>          2011-10-15 11:23:04 -0700
committer Stefan Reinauer <stefan.reinauer@coreboot.org> 2011-10-15 21:16:37 +0200
commit    3128685a918ee9c67a50f9753874b794008c8607 (patch)
tree      dde51d1605866e42ad19d8af16188f7aa0e45d83 /src/cpu/x86/smm
parent    1377491ac7a7bb75d6834bf79a219fd8ae1c03cd (diff)
SMM: Move wbinvd after pmode jump
According to Rudolf Marek, putting a memory instruction between the
CR0 write and the jmp of the protected-mode switch might hang the
machine. Move it after the jmp.
There might be a better solution, such as enabling the cache: keeping
it disabled does not prevent cache poisoning attacks anyway, so there is
no real point in doing so.
However, Intel docs say that SMM code in ASEG always runs uncached, so
we might also want to consider running SMM out of TSEG instead.
Signed-off-by: Stefan Reinauer <reinauer@google.com>
Change-Id: Id396acf3c8a79a9f1abcc557af6e0cce099955ec
Reviewed-on: http://review.coreboot.org/283
Reviewed-by: Sven Schnelle <svens@stackframe.org>
Tested-by: build bot (Jenkins)
Diffstat (limited to 'src/cpu/x86/smm')
-rw-r--r--  src/cpu/x86/smm/smmhandler.S  4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/src/cpu/x86/smm/smmhandler.S b/src/cpu/x86/smm/smmhandler.S
index 450aa538f8..774088e1f2 100644
--- a/src/cpu/x86/smm/smmhandler.S
+++ b/src/cpu/x86/smm/smmhandler.S
@@ -83,13 +83,15 @@ smm_handler_start:
 	andl	$0x7FFAFFD1, %eax	/* PG,AM,WP,NE,TS,EM,MP = 0 */
 	orl	$0x60000001, %eax	/* CD, NW, PE = 1 */
 	movl	%eax, %cr0
-	wbinvd

 	/* Enable protected mode */
 	data32	ljmp	$0x08, $1f

.code32
1:
+	/* flush the cache after disabling it */
+	wbinvd
+
 	/* Use flat data segment */
 	movw	$0x10, %ax
 	movw	%ax, %ds