author     Ronald G Minnich <rminnich@gmail.com>    2024-03-06 13:59:45 -0800
committer  ron minnich <rminnich@gmail.com>         2024-03-14 19:33:01 +0000
commit     72298ae964511f355fc9f4489da278dbaf03feea
tree       22335e51088d308447dd61198780ee761f3a453b
parent     091fb05312d8961f77893494d6d981e2a977710d
arch/riscv: support physical memory protection (PMP) registers
PMP (Physical Memory Protection) is a feature of the RISC-V
Privileged Architecture spec that allows regions of the address
space to be protected in a variety of ways: M-mode ranges can be
protected against access from lower privilege levels, and M mode
can be locked out of accessing memory reserved for lower
privilege levels. Limits on Read, Write, and Execute are allowed.
In coreboot, we protect M-mode code against Write and Execute
from lower privilege levels, but allow Reading, so as to ease
data structure access. PMP is not a security boundary; it is an
accident prevention device.
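For orientation, a minimal sketch (not from this patch; the macro
names are made up, and TOR is just one possible address mode) of
how the 8-bit pmpcfg field from the privileged spec encodes that
read-only-for-lower-modes choice:

  #include <stdint.h>

  #define PMP_R       (1 << 0) /* lower modes may read */
  #define PMP_W       (1 << 1) /* lower modes may write */
  #define PMP_X       (1 << 2) /* lower modes may execute */
  #define PMP_A_TOR   (1 << 3) /* address match: top of range */
  #define PMP_A_NAPOT (3 << 3) /* address match: power-of-2 region */
  #define PMP_L       (1 << 7) /* lock: entry also constrains M mode */

  /* readable, but not writable or executable, from S/U mode */
  uint8_t cfg = PMP_R | PMP_A_TOR;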
PMP is used here to protect persistent ramstage code that is
used to support SBI, e.g. printk and some data structures. It
also protects the SBI stacks. Note that there is one stack per
hart. There are 512- and 1024-hart SoCs being built today, so
the stack should be kept small.
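To put a rough number on it (illustrative, not from this patch):
at 1 KiB of SBI stack per hart, a 1024-hart SoC already ties up
1 MiB of PMP-protected memory; at 4 KiB per hart it is 4 MiB.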
PMP is not a general purpose protection mechanism and it is easy
to get around it. For example, S mode can stage a DMA that
overwrites all the M mode code. PMP is, rather, a way to avoid
simple accidents. It is understood that PMP depends on proper OS
behavior to implement true SBI security (personal conversation
with a RISC-V architect). Think of PMP as "Protection Minus
Protection".
PMP is also a very limited resource, as defined in the
architecture. This language is instructive: "PMP entries are
described by an 8-bit configuration register and one XLEN-bit
address register. Some PMP settings additionally use the address
register associated with the preceding PMP entry. Up to 16 PMP
entries are supported. If any PMP entries are implemented, then
all PMP CSRs must be implemented, but all PMP CSR fields are
WARL and may be hardwired to zero. PMP CSRs are only accessible
to M-mode."
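A consequence, sketched here for illustration (this helper is not
part of the patch): because pmpcfg fields are WARL, the only way
to know whether an entry exists is to write it and read it back;
a field that stays zero is hardwired and the entry is effectively
absent. The A field is left OFF, so the probe has no side
effects.

  #include <stdint.h>

  static int pmp_entry0_implemented(void)
  {
          uintptr_t cfg;

          asm volatile("csrs pmpcfg0, %0" :: "r"((uintptr_t)1)); /* try to set R */
          asm volatile("csrr %0, pmpcfg0" : "=r"(cfg));
          asm volatile("csrc pmpcfg0, %0" :: "r"((uintptr_t)1)); /* restore */
          return (cfg & 1) != 0;
  }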
In other words, if you implement PMP even a little, you have to
implement it all; but you can implement it in part by simply
returning 0 for a pmpcfg. Also, PMP address registers (pmpaddr)
don't have to implement all the bits. On a SiFive FU740, for
example, PMP only implements bits 33:0, i.e. a 34-bit address.
PMPs are just packed with all kinds of special cases. There is
no requirement that you read back what you wrote to the pmpaddr
registers. The earlier PMP code would die if the read did not
match the write but, since pmpaddr registers are WARL, that was
not correct. An SoC can decide it only does 4096-byte
granularity, even for TOR PMP entries, and that is your problem
if you wanted finer granularity. SoCs don't have to implement
all the high-order bits either.
And, to reiterate, there is no requirement about which of the pmpcfg
are implemented. Implementing just pmpcfg15 is allowed.
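The privileged spec does give a way to discover the granularity
at run time. A minimal sketch of that probe (again not this
patch's code, just the spec's recipe): clear pmpcfg0, write all
ones to pmpaddr0, and read it back; if G is the index of the
least-significant set bit, the granularity is 2^(G+2) bytes.

  #include <stdint.h>

  static uintptr_t pmp_granularity(void)
  {
          uintptr_t val;

          asm volatile("csrw pmpcfg0, zero");
          asm volatile("csrw pmpaddr0, %0" :: "r"(~(uintptr_t)0));
          asm volatile("csrr %0, pmpaddr0" : "=r"(val));
          if (!val)
                  return 0; /* no pmpaddr bits implemented at all */
          return (uintptr_t)1 << (__builtin_ctzl(val) + 2);
  }

On a part like the FU740 the read-back also shows the
unimplemented high bits as zero, which is exactly the WARL
behavior the old die-on-mismatch check tripped over.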
The coreboot SBI code was written before PMP existed. In order
for the coreboot SBI code to work, this patch is necessary.
With this change, a simple S-mode payload that calls SBI putchar
works:
1:
        li      a7, 1   # SBI legacy extension 0x01: console_putchar
        li      a0, 48  # character to print, ASCII '0'
        ecall
        j       1b
Without this change, it will not work.
Getting this to build on RV32 required changes to the API, as it
was incorrect. In RV32, PMP addresses are 34 bits. Hence,
setup_pmp needed to accept u64; uintptr_t can not be used, as on
32 bits it is only a 32-bit number. The internal API uses
uintptr_t, but the exported API uses u64, so external code does
not have to think about right shifts on base and size.
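A sketch of the shape of that split (the signature and names here
are illustrative, not the exact ones in the patch): the exported
entry point takes u64 so a 34-bit RV32 address is never
truncated, and only after the right shift by 2 does the value fit
a native register for the CSR write.

  #include <stdint.h>

  #define PMP_SHIFT 2 /* pmpaddr holds address bits [33:2] on RV32 */

  void setup_pmp_sketch(uint64_t base, uint64_t size, uint8_t flags)
  {
          /* TOR upper bound: (base + size) >> 2 fits in 32 bits
             even when base + size needs 34 bits. */
          uintptr_t pmpaddr = (uintptr_t)((base + size) >> PMP_SHIFT);

          /* ... write pmpaddrN and the matching pmpcfg field ... */
          (void)pmpaddr;
          (void)flags;
  }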
Errors are detected: a bad base or size will result in a
BIOS_EMERG print, but not a panic.
Boot, don't brick, if at all possible.
There are small changes to the internal API to reduce
stack pressure: there's no need to have two pmpcfg_t
on the stack when one will do.
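The shape of that error path, sketched with hypothetical names
(printk and BIOS_EMERG are coreboot's normal logging interface):

  #include <console/console.h>
  #include <types.h>

  static void install_pmp_entry_sketch(u64 base, u64 size, u64 granularity)
  {
          if ((base | size) & (granularity - 1)) {
                  printk(BIOS_EMERG,
                         "PMP: 0x%llx/0x%llx not aligned to 0x%llx; skipping\n",
                         (unsigned long long)base, (unsigned long long)size,
                         (unsigned long long)granularity);
                  return; /* complain loudly, keep booting */
          }
          /* ... program pmpaddr/pmpcfg here ... */
  }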
TEST: Linux now boots partway on the SiFive Unmatched. There are
changes in flight on the coreboot SBI that will allow Linux to
boot further, but they are out of scope for this patch.
Currently, clk_ignore_unused is required; removing that need
requires a separate patch.
Change-Id: I6edce139d340783148cbb446cde004ba96e67944
Signed-off-by: Ronald G Minnich <rminnich@gmail.com>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/81153
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Reviewed-by: Philipp Hug <philipp@hug.cx>