[PATCH v1] powerpc/powernv/pci: fix PE in re-used pci_dn for pnv_pci_enable_device_hook
虞陆铭
luming.yu at shingroup.cn
Tue Aug 20 18:32:37 AEST 2024
>Looks like the latest upstream kernel has solved the problem:
>echo 1 > /sys/bus/pci/devices/0001:0d:00.0/remove
> echo 1 > /sys/bus/pci/rescan
>
>[ 230.399969] pci_bus 0001:0d: Configuring PE for bus
>[ 230.399974] pci 0001:0d : [PE# fb] Secondary bus 0x000000000000000d associated with PE#fb
>[ 230.400084] pci 0001:0d:00.0: Configured PE#fb
>[ 230.400086] pci 0001:0d : [PE# fb] Setting up 32-bit TCE table at 0..80000000
>[ 230.400698] pci 0001:0d : [PE# fb] Setting up window#0 0..3fffffffff pg=10000
>[ 230.400703] pci 0001:0d : [PE# fb] Enabling 64-bit DMA bypass
>[ 230.400716] pci 0001:0d:00.0: Adding to iommu group 1
>[ 230.400917] mmiotrace: ioremap_*(0x3fe080800000, 0x2000) = 00000000ecf53fa1
>[ 230.401088] nvme nvme0: pci function 0001:0d:00.0
>[ 230.401098] nvme 0001:0d:00.0: enabling device (0140 -> 0142)
>[ 230.401146] mmiotrace: ioremap_*(0x3fe080804000, 0x400) = 000000003e6b2e5b
>[ 230.429600] nvme nvme0: D3 entry latency set to 10 seconds
>[ 230.429896] mmiotrace: ioremap_*(0x3fe080804000, 0x400) = 000000006f3fd92d
>[ 230.439138] nvme nvme0: 63/0/0 default/read/poll queues
>
>the original problem in the PCI rescan path after hot remove, shown below, is gone!
>pci 0020:0e:00.0: BAR 0: assigned [mem 0x3fe801820000-0x3fe80182ffff 64bit]
> nvme nvme1: pci function 0020:0e:00.0
> nvme 0020:0e:00.0 pci_enable_device() blocked, no PE assigned.
>
>Probably fixed by the commit:
>5ac129cdb50b4efda59ee5ea7c711996a3637b34
>Author: Joel Stanley <joel at jms.id.au>
>Date: Tue Jun 13 14:22:00 2023 +0930
>powerpc/powernv/pci: Remove ioda1 support
>
>which was merged into mainline after the upstream kernel in which I saw the problem,
>back when I came up with the patch.
>
>Given that the facts have changed, the patch proposal is even more trivial now.
>I won't push for upstream inclusion; instead, I will keep it in my local test queue for a while.
For users sticking to a 4.18-based production kernel, where a massive backport of
upstream code for this small problem is infeasible, the patch is still useful.
I just tested it, and it works as expected:
[ 122.116913] pci 0001:14:0e.0: PCI bridge to [bus 1a]
[ 122.117359] nvme nvme0: pci function 0001:0d:00.0
[ 122.117377] pci 0001:0d:00.0: [PE# fd] Associated device to PE
[ 122.117489] nvme 0001:0d:00.0: enabling device (0140 -> 0142)
The device that was software-offlined is indeed back, thanks to the patch,
though there is a warning like the one below that still needs to be solved:
[ 122.117497] nvme 0001:0d:00.0: Warning: IOMMU dma not supported: mask 0xffffffffffffffff, table unavailable
[ 122.117502] nvme nvme0: Removing after probe failure status: -12
[root@localhost ~]# lspci -v -s 0001:0d:00.0
0001:0d:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM173X (prog-if 02 [NVM Express])
Subsystem: Samsung Electronics Co Ltd Device a813
Physical Slot: PCIE#3
Device tree node: /sys/firmware/devicetree/base/pciex@3fffe40100000/pci@0/pci@0/pci@9/mass-storage@0
Flags: fast devsel, IRQ 494, NUMA node 0
Memory at 3fe080800000 (64-bit, non-prefetchable) [size=32K]
Expansion ROM at 3fe080810000 [virtual] [disabled] [size=64K]
Capabilities: [40] Power Management version 3
Capabilities: [70] Express Endpoint, MSI 00
Capabilities: [b0] MSI-X: Enable- Count=64 Masked-
Capabilities: [100] Advanced Error Reporting
Capabilities: [148] Device Serial Number 64-40-50-11-9a-38-25-00
Capabilities: [168] Alternative Routing-ID Interpretation (ARI)
Capabilities: [178] Secondary PCI Express
Capabilities: [198] Physical Layer 16.0 GT/s <?>
Capabilities: [1c0] Lane Margining at the Receiver <?>
Capabilities: [1e8] Single Root I/O Virtualization (SR-IOV)
Capabilities: [3a4] Data Link Feature <?>
Kernel modules: nvme
So, it sounds like a good patch to me. :-)
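For reference, the remove/rescan test sequence used above can be scripted roughly as
follows. This is only a sketch of the manual sysfs steps quoted earlier, to be run as
root on the affected machine; the device address 0001:0d:00.0 is specific to this system.

```shell
#!/bin/sh
# Sketch of the hot-remove + rescan test. The device address is specific
# to this system; pass a different one as the first argument if needed.
DEV=${1:-0001:0d:00.0}

if [ -e "/sys/bus/pci/devices/$DEV" ]; then
    # Software-offline the device, then rescan the bus; with the patch
    # applied, pnv_pci_enable_device_hook re-assigns a PE instead of
    # blocking pci_enable_device().
    echo 1 > "/sys/bus/pci/devices/$DEV/remove"
    echo 1 > /sys/bus/pci/rescan
    lspci -v -s "$DEV"
else
    echo "device $DEV not present; nothing to do"
fi
```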
>Cheers!
>Luming
------------------ Original ------------------
From: "虞陆铭"<luming.yu at shingroup.cn>;
Date: Tue, Nov 28, 2023 02:43 PM
To: "linuxppc-dev"<linuxppc-dev at lists.ozlabs.org>; "linux-kernel"<linux-kernel at vger.kernel.org>; "mpe"<mpe at ellerman.id.au>; "npiggin"<npiggin at gmail.com>; "christophe.leroy"<christophe.leroy at csgroup.eu>;
Cc: "luming.yu"<luming.yu at gmail.com>; "ke.zhao"<ke.zhao at shingroup.cn>; "dawei.li"<dawei.li at shingroup.cn>; "shenghui.qu"<shenghui.qu at shingroup.cn>; "虞陆铭"<luming.yu at shingroup.cn>;
Subject: [PATCH v1] powerpc/powernv/pci: fix PE in re-used pci_dn for pnv_pci_enable_device_hook
After hot-removing a PCIe device whose pci_dn has the pnv_php driver attached,
a PCI rescan via echo 1 > /sys/bus/pci/rescan could fail with an error
message like:
pci 0020:0e:00.0: BAR 0: assigned [mem 0x3fe801820000-0x3fe80182ffff
64bit]
nvme nvme1: pci function 0020:0e:00.0
nvme 0020:0e:00.0 pci_enable_device() blocked, no PE assigned.
It appears that in this case the pci_dn object is reused with only pe_number
clobbered, and a simple call to pnv_ioda_setup_dev_PE should get the PE
number back and solve the problem.
Signed-off-by: Luming Yu <luming.yu at shingroup.cn>
---
v0 -> v1:
- clean up garbage leaked into the git format-patch output that stems from git clone and checkout
- fix file conflicts in the local Windows filesystem caused by case and naming quirks
---
arch/powerpc/platforms/powernv/pci-ioda.c | 11 +-
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 28fac4770073..9d7add79ee3d 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -2325,11 +2325,18 @@ static resource_size_t pnv_pci_default_alignment(void)
static bool pnv_pci_enable_device_hook(struct pci_dev *dev)
{
struct pci_dn *pdn;
+ struct pnv_ioda_pe *pe;
pdn = pci_get_pdn(dev);
- if (!pdn || pdn->pe_number == IODA_INVALID_PE) {
- pci_err(dev, "pci_enable_device() blocked, no PE assigned.\n");
+ if (!pdn)
return false;
+
+ if (pdn->pe_number == IODA_INVALID_PE) {
+ pe = pnv_ioda_setup_dev_PE(dev);
+ if (!pe) {
+ pci_err(dev, "pci_enable_device() blocked, no PE assigned.\n");
+ return false;
+ }
}
return true;