author	Nico Huber <nico.h@gmx.de>	2021-03-07 01:39:18 +0100
committer	Patrick Georgi <pgeorgi@google.com>	2021-03-15 06:04:38 +0000
commit	968ef759887b4da09115a6b2d2e2afa223091792 (patch)
tree	f9fff4f429080a168255596eadd5bb65d80cbd45 /src/soc/intel/common/block/pcie
parent	ec2cbecf9368878ba6c2fbb42d02213ef8d04a91 (diff)
pciexp_device: Rewrite LTR configuration
I was bugged by spurious "Failed to enable LTR" messages for years.
Looking at the current algorithm, it is flawed in multiple ways:
* It looks like the author didn't know they implemented a
  recursive algorithm (pciexp_enable_ltr()) inside another
  recursive algorithm (pciexp_scan_bridge()). Thus, at every
  tree level, everything is run again for the whole subtree.
* LTR is enabled whether or not `.set_ltr_max_latencies` is
  implemented, leaving the endpoints' LTR settings at 0: they
  are told to always report zero tolerance. In theory, depending
  on the root-complex implementation, this may result in higher
  power consumption than without LTR messages.
* `.set_ltr_max_latencies` is only considered for the direct
parent of a device. Thus, even with it implemented, an
endpoint below a (non-root) bridge may suffer from the 0
settings as described above.
* Due to the double-recursive nature, LTR is enabled starting
with the endpoints, then moving up the tree, while the PCIe
spec tells us to do it in the exact opposite order.
With the current implementation of pciexp_scan_bridge(), it is
hard to hook anything in that runs for each device from top to
bottom. So the proposed solution still adds some redundancy:
First, for every device that uses pciexp_scan_bus(), we enable
LTR if possible (see below). Then, when returning from the bus-
scanning recursion, we enable LTR for every device and configure
the maximum latencies (if supported). The latter runs again on
all bridges, because it's hard to know if pciexp_scan_bus() was
used for them.
When to enable LTR:
* For all devices that implement `.set_ltr_max_latencies`.
* For all devices below a bridge that has it enabled already.
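The two rules above reduce to a small per-device predicate. A minimal C sketch of that decision (the struct fields and function names here are hypothetical stand-ins for illustration, not the actual coreboot API, where the information comes from struct device and the driver's pci_operations):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical device model for illustration only. */
struct ltr_dev {
	struct ltr_dev *parent;     /* upstream bridge, NULL at the root */
	bool ltr_enabled;           /* LTR already enabled on this device */
	bool has_ltr_max_latencies; /* stands in for .set_ltr_max_latencies */
};

/* Enable LTR iff the device provides latency values itself, or its
 * upstream bridge already has LTR enabled. Because bridges are visited
 * before their children, this naturally enforces top-down order. */
static bool ltr_should_enable(const struct ltr_dev *dev)
{
	if (dev->has_ltr_max_latencies)
		return true;
	return dev->parent != NULL && dev->parent->ltr_enabled;
}
```

With this rule an endpoint without latency values of its own still gets LTR enabled, but only once its upstream bridge has it, which avoids the zero-tolerance problem described above.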
Change-Id: I2c5b8658f1fc8cec15e8b0824464c6fc9bee7e0e
Signed-off-by: Nico Huber <nico.h@gmx.de>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/51328
Reviewed-by: Angel Pons <th3fanbus@gmail.com>
Reviewed-by: Tim Wawrzynczak <twawrzynczak@chromium.org>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
Diffstat (limited to 'src/soc/intel/common/block/pcie')
-rw-r--r-- | src/soc/intel/common/block/pcie/pcie.c | 10 |
1 file changed, 4 insertions, 6 deletions
diff --git a/src/soc/intel/common/block/pcie/pcie.c b/src/soc/intel/common/block/pcie/pcie.c
index 346c0ff225..f7f0902ab7 100644
--- a/src/soc/intel/common/block/pcie/pcie.c
+++ b/src/soc/intel/common/block/pcie/pcie.c
@@ -45,16 +45,14 @@ static void pch_pcie_init(struct device *dev)
 	pci_write_config16(dev, PCI_SEC_STATUS, reg16);
 }
 
-static void pcie_set_ltr_max_latencies(struct device *dev, unsigned int offset)
+static void pcie_get_ltr_max_latencies(u16 *max_snoop, u16 *max_nosnoop)
 {
-	/* Set max snoop and non-snoop latency for the SOC */
-	pci_write_config32(dev, offset,
-			   PCIE_LTR_MAX_NO_SNOOP_LATENCY_3146US << 16 |
-			   PCIE_LTR_MAX_SNOOP_LATENCY_3146US);
+	*max_snoop = PCIE_LTR_MAX_SNOOP_LATENCY_3146US;
+	*max_nosnoop = PCIE_LTR_MAX_NO_SNOOP_LATENCY_3146US;
 }
 
 static struct pci_operations pcie_ops = {
-	.set_ltr_max_latencies = pcie_set_ltr_max_latencies,
+	.get_ltr_max_latencies = pcie_get_ltr_max_latencies,
 	.set_subsystem = pci_dev_set_subsystem,
 };
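To illustrate the direction of the hook change in the diff: the old `.set_ltr_max_latencies` had each SoC driver write the LTR register itself, while the new `.get_ltr_max_latencies` lets generic PCIe code query the platform's latency values and perform the write. A self-contained sketch under assumed definitions (the constant values below are arbitrary placeholders, not the real register encodings, and `ltr_register_value()` is an illustrative helper, not coreboot code):

```c
#include <stdint.h>

typedef uint16_t u16;
typedef uint32_t u32;

/* Placeholder encodings for illustration; the real values for a
 * 3146us tolerance are defined by the SoC headers. */
#define PCIE_LTR_MAX_SNOOP_LATENCY_3146US	0x1003
#define PCIE_LTR_MAX_NO_SNOOP_LATENCY_3146US	0x1003

/* The new-style hook, as in the diff: report values, don't write. */
static void pcie_get_ltr_max_latencies(u16 *max_snoop, u16 *max_nosnoop)
{
	*max_snoop = PCIE_LTR_MAX_SNOOP_LATENCY_3146US;
	*max_nosnoop = PCIE_LTR_MAX_NO_SNOOP_LATENCY_3146US;
}

/* Generic code can then combine the two halves into the 32-bit LTR
 * register layout (no-snoop in the upper 16 bits), exactly as the old
 * setter's pci_write_config32() call did. */
static u32 ltr_register_value(void)
{
	u16 snoop, nosnoop;

	pcie_get_ltr_max_latencies(&snoop, &nosnoop);
	return (u32)nosnoop << 16 | snoop;
}
```

Moving the register write out of the SoC hook is what allows the generic code to program latencies at the right point of the top-down traversal, for bridges and endpoints alike.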