path: root/src/arch/riscv/id.S
author: Julius Werner <jwerner@chromium.org> 2018-06-25 18:06:59 -0700
committer: Julius Werner <jwerner@chromium.org> 2018-06-26 23:59:18 +0000
commit: e0058e89ab94ef7ef366c596d3e03ad693694d2a (patch)
tree: 9d30123effa4f26419fcf2a12ab909e5d0ebf881 /src/arch/riscv/id.S
parent: a98b5bf89b39fede95c34bf81e9decc1e6b6d38f (diff)
arm64: Reimplement mmu_disable() in assembly
Disabling the MMU with proper cache behavior is a bit tricky on ARM64: you can flush the cache first and then disable the MMU (like we have been doing), but then you run the risk of having new cache lines allocated in the tiny window between the two, which may or may not become a problem when those get flushed at a later point (on some platforms certain memory regions "go away" at certain points in a way that makes the CPU very unhappy if it ever issues a write cycle to them again afterwards).

The obvious alternative is to first disable the MMU and then flush the cache, ensuring that every memory access after the flush already has the non-cacheable attribute. But we can't just flip the order around in the C code that we have, because then those accesses in the tiny window in-between will go straight to memory, so loads may yield the wrong result or stores may get overwritten again by the later cache flush.

In the end, this all shouldn't really be a problem, because we can do both operations purely from registers without doing any explicit memory accesses in-between. We just have to reimplement the function in assembly to make sure the compiler doesn't insert any stack accesses at the wrong points.

Change-Id: Ic552960c91400dadae6f130b2521a696eeb4c0b1
Signed-off-by: Julius Werner <jwerner@chromium.org>
Reviewed-on: https://review.coreboot.org/27238
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Reviewed-by: Aaron Durbin <adurbin@chromium.org>
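For illustration, a minimal sketch of what such a register-only sequence could look like. This is an assumption-laden sketch, not the code added by this commit: it assumes execution at EL3, coreboot-style ENTRY/ENDPROC assembler macros, and a dcache_clean_invalidate_all routine that performs the set/way flush without touching data memory itself.

    ENTRY(mmu_disable)
    	/* Clear the MMU enable (M, bit 0) and data cache enable (C, bit 2)
    	 * in SCTLR_EL3 using only registers, so no memory access can be
    	 * issued between disabling the MMU and flushing the cache. */
    	mrs	x0, sctlr_el3
    	mov	x1, #((1 << 0) | (1 << 2))
    	bic	x0, x0, x1
    	msr	sctlr_el3, x0
    	isb
    	/* Tail-branch into the flush routine (assumed helper) so not even
    	 * a saved return address needs a stack slot at this point. */
    	b	dcache_clean_invalidate_all
    ENDPROC(mmu_disable)

The key property is that everything between the msr and the branch stays in registers; whether the flush is safe to run with the cache already off depends on the flush routine itself avoiding data memory accesses, which is why it is assumed to be a pure set/way loop here.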
Diffstat (limited to 'src/arch/riscv/id.S')
0 files changed, 0 insertions, 0 deletions