From e563815e059ef5881a42e6f8b37094783771d5a7 Mon Sep 17 00:00:00 2001
From: Patrick Rudolph
Date: Sun, 9 Dec 2018 10:48:59 +0100
Subject: arch/x86/boot: Jump to payload in protected mode

* On ARCH_RAMSTAGE_X86_64, jump to the payload in protected mode.
* Add a helper function to jump to arbitrary code in protected mode,
  similar to the real mode call handler.
* Doesn't affect existing x86_32 code.
* Add a macro to cast a pointer to uint32_t that dies if the value
  would overflow on conversion.

Tested on QEMU Q35 using SeaBIOS as payload.
Tested on Lenovo T410 with additional x86_64 patches.

Change-Id: I6552ac30f1b6205e08e16d251328e01ce3fbfd14
Signed-off-by: Patrick Rudolph
Reviewed-on: https://review.coreboot.org/c/coreboot/+/30118
Tested-by: build bot (Jenkins)
Reviewed-by: Arthur Heymans
---
 Documentation/arch/x86/index.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

(limited to 'Documentation')

diff --git a/Documentation/arch/x86/index.md b/Documentation/arch/x86/index.md
index 81eb51925a..7b9e1fcfa0 100644
--- a/Documentation/arch/x86/index.md
+++ b/Documentation/arch/x86/index.md
@@ -15,6 +15,8 @@ In order to add support for x86_64 the following assumptions are made:
 * The high dword of pointers is always zero
 * The reference implementation is qemu
 * The CPU supports 1GiB hugepages
+* x86 payloads are loaded below 4GiB in physical memory and are jumped
+  to in *protected mode*
 
 ## Assumptions for all stages using the reference implementation
 * 0-4GiB are identity mapped using 2MiB-pages as WB
@@ -47,7 +49,7 @@ At the moment *$n* is 4, which results in identity mapping the lower 4 GiB.
 * Add assembly code for long mode - *DONE*
 * Add assembly code for SMM - *DONE*
 * Add assembly code for postcar stage - *DONE*
-* Add assembly code to return to protected mode - *TODO*
+* Add assembly code to return to protected mode - *DONE*
 * Implement reference code for mainboard `emulation/qemu-q35` - *TODO*
 
 ## Future work
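
For a concrete picture of the two items the commit message describes (the
overflow-checked pointer cast and the protected mode call helper), here is a
minimal C sketch of how they could fit together. The identifiers
`pointer_to_uint32_safe`, `protected_mode_jump`, and `jump_to_payload` are
illustrative assumptions rather than confirmed names from this patch;
coreboot's `die()` and GNU C statement expressions are assumed available.

```c
#include <stdint.h>
#include <console/console.h>	/* for die(); assumed coreboot environment */

/*
 * Hypothetical macro: cast a pointer down to uint32_t, dying if the
 * value does not fit in 32 bits (the payload must sit below 4GiB).
 */
#define pointer_to_uint32_safe(x) ({ \
	if ((uintptr_t)(x) > 0xffffffffUL) \
		die("Pointer too big for the protected mode jump\n"); \
	(uint32_t)(uintptr_t)(x); })

/*
 * Hypothetical helper, implemented in assembly: drop from long mode to
 * protected mode and jump to a 32-bit entry point with one argument,
 * mirroring the real mode call handler.
 */
void protected_mode_jump(uint32_t func_ptr, uint32_t argument);

/* Example use when handing control to an x86 payload: */
static void jump_to_payload(void *entry, void *arg)
{
	protected_mode_jump(pointer_to_uint32_safe(entry),
			    pointer_to_uint32_safe(arg));
}
```

On x86_32 builds the payload entry can be called directly, which is how an
assembly-backed mode switch guarded by such a cast can leave the existing
32-bit path untouched.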