[SLOF] [PATCH] pci: Avoid 32-bit prefetchable memory area if possible

Alexey Kardashevskiy aik at ozlabs.ru
Fri Jul 21 17:45:42 AEST 2017


On 21/07/17 17:23, Thomas Huth wrote:
> On 21.07.2017 05:51, Alexey Kardashevskiy wrote:
>> On 20/07/17 18:47, Thomas Huth wrote:
>>> On 20.07.2017 08:54, Alexey Kardashevskiy wrote:
>>>> On 17/07/17 20:05, Thomas Huth wrote:
>>>>> On 17.07.2017 08:18, Alexey Kardashevskiy wrote:
>>>>>> On 14/07/17 19:45, Thomas Huth wrote:
>>>>>>> PCI bridges can only have one prefetchable memory area. If we are
>>>>>>> already using 64-bit prefetchable memory regions, we can not use
>>>>>>> a dedicated 32-bit prefetchable memory region anymore. In that
>>>>>>> case the 32-bit BARs should all be located in the 32-bit non-
>>>>>>> prefetchable memory space instead.
>>>>>>>
>>>>>>> Signed-off-by: Thomas Huth <thuth at redhat.com>
>>>>>>> ---
>>>>>>>  board-qemu/slof/pci-phb.fs | 16 +++++++++++-----
>>>>>>>  slof/fs/pci-properties.fs  |  7 ++++++-
>>>>>>>  2 files changed, 17 insertions(+), 6 deletions(-)
>>>>>>>
>>>>>>> diff --git a/board-qemu/slof/pci-phb.fs b/board-qemu/slof/pci-phb.fs
>>>>>>> index b7bf9cf..926efba 100644
>>>>>>> --- a/board-qemu/slof/pci-phb.fs
>>>>>>> +++ b/board-qemu/slof/pci-phb.fs
>>>>>>> @@ -253,12 +253,9 @@ setup-puid
>>>>>>>              THEN
>>>>>>>           ENDOF
>>>>>>>           2000000 OF                             \ 32-bit memory space?
>>>>>>> -            decode-64 pci-next-mem !            \ Decode mem base address
>>>>>>> +            decode-64 dup >r pci-next-mmio !    \ Decode base address
>>>>>>>              decode-64 drop                      \ Forget the parent address
>>>>>>> -            decode-64 2 / dup >r                \ Decode and calc size/2
>>>>>>> -            pci-next-mem @ + dup pci-max-mem !  \ and calc max mem address
>>>>>>> -            dup pci-next-mmio !                 \ which is the same as MMIO base
>>>>>>> -            r> + pci-max-mmio !                 \ calc max MMIO address
>>>>>>> +            decode-64 r> + pci-max-mmio !       \ calc max MMIO address
>>>>>>>           ENDOF
>>>>>>>           3000000 OF                             \ 64-bit memory space?
>>>>>>>              decode-64 dup >r pci-next-mem64 !
>>>>>>> @@ -270,6 +267,15 @@ setup-puid
>>>>>>>     ( prop-addr prop-len )
>>>>>>>     2drop
>>>>>>>  
>>>>>>> +   \ If we do not have 64-bit prefetchable memory, split the 32-bit space:
>>>>>>
>>>>>> When is this ^^^^^^^^^^^^^^^^^^^ possible?
>>>>>
>>>>> This happens if you either use SLOF with an older version of QEMU, or
>>>>> start a recent QEMU with an older machine type, e.g. "-M pseries-2.1".
>>>>> That means we still have to support this when running older VMs on
>>>>> current QEMU.
>>>>>
>>>>>>> +   pci-next-mem64 @ 0= IF
>>>>>>> +      pci-next-mmio @ pci-next-mem !            \ Start of 32-bit prefetchable
>>>>>>> +      pci-max-mmio @ pci-next-mmio @ - 2 /      \ Calculate new size
>>>>>>> +      pci-next-mmio @ +                         \ The middle of the area
>>>>>>> +      dup pci-max-mem !
>>>>>>> +      pci-next-mmio !
>>>>>>> +   THEN
>>>>>>> +
>>>>>>>     phb-debug? IF
>>>>>>>       ." pci-next-io   = " pci-next-io @ . cr
>>>>>>>       ." pci-max-io    = " pci-max-io  @ . cr
>>>>>>> diff --git a/slof/fs/pci-properties.fs b/slof/fs/pci-properties.fs
>>>>>>> index b7bb534..6f8f013 100644
>>>>>>> --- a/slof/fs/pci-properties.fs
>>>>>>> +++ b/slof/fs/pci-properties.fs
>>>>>>> @@ -159,7 +159,12 @@
>>>>>>>  \ Setup a prefetchable 32bit BAR and return its size
>>>>>>>  : assign-mem32-bar ( bar-addr -- 4 )
>>>>>>>          dup pci-bar-size-mem32          \ fetch size
>>>>>>> -        pci-next-mem                    \ var to change
>>>>>>> +        \ Do we have a dedicated 32-bit prefetchable area? If not, use MMIO
>>>>>>> +        pci-next-mem @ IF
>>>>>>> +            pci-next-mem
>>>>>>> +        ELSE
>>>>>>> +            pci-next-mmio
>>>>>>> +        THEN
>>>>>>
>>>>>>
>>>>>> The commit log explains this chunk but not the other chunks.
>>>>>
>>>>> We've got to avoid creating that fake "pci-next-mem" region so that
>>>>> we can check pci-next-mem != 0 here. Shall I respin the patch
>>>>> and elaborate on this in the commit message?
>>>>>
>>>>>> How did you test the change to get different behaviour?
>>>>>
>>>>> Run QEMU with "-M pseries-2.1"
>>>>
>>>> I did not go that far, I tried this:
>>>>
>>>> qemu-system-ppc64 \
>>>> -nodefaults \
>>>> -chardev stdio,id=STDIO0,signal=off,mux=on \
>>>> -device spapr-vty,id=svty0,chardev=STDIO0,reg=0x71000100 \
>>>> -mon id=MON0,chardev=STDIO0,mode=readline -vnc "localhost:100" \
>>>> -device pci-bridge,id=pci.0_1,bus=pci.0,addr=1.0,chassis_nr=1 \
>>>> -device VGA,id=VGA0,bus=pci.0,addr=2.0 -enable-kvm \
>>>> -smp 8,threads=8 \
>>>> -machine pseries
>>>>
>>>> Note that both the bridge and the VGA device are on the root bus, and the
>>>> bridge goes first. Without this patch, VNC shows what is expected; with
>>>> the patch, it is a black screen.
>>>>
>>>> "pci: Translate PCI addresses to host addresses at the end of map-in" does
>>>> not change anything here.
>>>>
>>>> Ideas?
>>>
>>> You also need my "Fix pci-bridge-set-mem-base and
>>> pci-bridge-set-mem-limit" patch in this case. The problem is that the
>>> old pci-bridge-set-mem-limit function messes around with pci-next-mem -
>>> even if that memory region should not be used at all:
>>>
>>> : pci-bridge-set-mem-limit ( addr -- )
>>>         pci-next-mem @ 100000 +            \ add space for hot-plugging
>>>         100000 #aligned                    \ align to 1MB boundary
>>>         dup pci-next-mem !                 \ and write it back
>>>
>>> So if pci-next-mem was initially set to 0, it ends up with the nonsensical
>>> value 0x100000 after the bridge has been scanned, so the code in
>>> assign-mem32-bar sets the BAR to a wrong value.
>>
>>
>> Ok, that patch helps, thanks. I'd still like to get a cleaner version of that.
>>
>> Another observation: the VGA device does not work if the bridge it is
>> connected to has multifunction=on:
>>
>> -device
>> pci-bridge,id=pci.0_1,bus=pci.0,addr=1.0,chassis_nr=1,multifunction=on \
>> -device VGA,id=VGA0,bus=pci.0_1,addr=2.0
>>
>> Multifunction bridges do not make much sense (if any), but the difference
>> in behaviour is still not clear to me. Ideas?
> 
> I think the VGA device is not discovered at all in this case.
> SLOF only scans the first function of a bridge -

Nah, other devices are in separate slots.

> since multifunction
> bridges do not exist in the wild, do they?

Ah, I know: multifunction devices have 0x80 set in the config space header
type, so 0x81 is either not valid or just not supported. Never mind then.
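
For reference, the header type byte at config offset 0Eh combines the
multifunction flag (bit 7) with the layout code (bits 6:0), so 0x81 would be
a multifunction PCI-to-PCI bridge, which a scan comparing the raw byte
against 1 would not recognize. A tiny standard-Forth sketch (not SLOF code,
the word name is made up):

hex
: header-type>layout ( type -- layout )  7f and ;   \ strip the multifunction bit
81 header-type>layout 1 = .                         \ prints -1: still a P2P bridge layout
decimal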

> Anyway, this is a completely
> different topic and certainly should not be handled in this patch here.

Definitely, that was just an observation.

BTW, I posted some rework of that "Fix pci-bridge-set-mem-base and
pci-bridge-set-mem-limit" patch; please have a look and say whether it makes
sense. Thanks.


-- 
Alexey

