[SLOF] [PATCH 3/4] fbuffer: Implement MRMOVE as an accelerated primitive

Nikunj A Dadhania nikunj at linux.vnet.ibm.com
Thu Sep 10 22:00:02 AEST 2015

Thomas Huth <thuth at redhat.com> writes:

> On 09/09/15 13:37, Nikunj A Dadhania wrote:
>> Thomas Huth <thuth at redhat.com> writes:
>>> On 09/09/15 13:05, Nikunj A Dadhania wrote:
>>>> Thomas Huth <thuth at redhat.com> writes:
>>>>> On 09/09/15 08:45, Nikunj A Dadhania wrote:
>>>>>> Thomas Huth <thuth at redhat.com> writes:
>>>>>>> On 03/08/15 17:53, Thomas Huth wrote:
>>>>>>>> On 03/08/15 12:37, Nikunj A Dadhania wrote:
>>>>>>>>> Thomas Huth <thuth at redhat.com> writes:
>>>>>>>>>> The character drawing function fb8-draw-character uses "mrmove"
>>>>>>>>>> (which moves main memory contents to IO memory) to copy the data
>>>>>>>>>> of the character from main memory to the frame buffer. However,
>>>>>>>>>> the current implementation of "mrmove" performs quite badly on
>>>>>>>>>> board-qemu since it triggers a hypercall for each memory access
>>>>>>>>>> (e.g. for each 8 bytes that are transferred).
>>>>>>>>>> But since the KVMPPC_H_LOGICAL_MEMOP hypercall can transfer bigger
>>>>>>>>>> regions at once, we can accelerate the character drawing quite a
>>>>>>>>>> bit by simply mapping the "mrmove" to the same macro that is
>>>>>>>>>> already used for the "rmove". For keeping board-js2x in sync,
>>>>>>>>>> this patch also transforms the "mrmove" for js2x into primitives.
>>>>>>>>>> Signed-off-by: Thomas Huth <thuth at redhat.com>
>>>>>>>>> I don't have a js2x handy; did you test this on js2x?
>>>>>>>> No, sorry, unfortunately, I also was not able to test this on js2x yet.
>>>>>>>> I still have a YDL PowerStation somewhere in a corner of my flat, but
>>>>>>>> it's currently not set up ... will do that one day when I get enough
>>>>>>>> spare time again, but that won't happen within the next few weeks (KVM
>>>>>>>> forum's ahead!). So if there are problems, I'll fix them up as soon as I
>>>>>>>> get the PowerStation running again (and I guess there might be other
>>>>>>>> problems, too, since board-js2x hardly got any testing within the last
>>>>>>>> months/years, I think).
>>>>>>> FWIW, I've recently dusted off my PowerStation and gave it a try...
>>>>>>> the HEAD of the SLOF master branch is unfortunately quite broken on js2x
>>>>>>> nowadays (I'll try to send some first fixes later), but I was able to
>>>>>>> cherry-pick the fbuffer acceleration patches to an older level of SLOF
>>>>>>> from 2012 which was still working nicely.
>>>>>> TCG seems to be broken with this new series; git bisect pointed to:
>>>>>> 59a135e fbuffer: Implement MRMOVE as an accelerated primitive
>>>>>> I haven't looked into the details of why it's failing, though.
>>>>> It seems to work fine for me here (SLOF master and QEMU master
>>>>> branch). Which version of QEMU did you use? 
>>>> QEMU: fc04a73 Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20150908' into staging
>>>> SLOF: 811277a version: update to 20150813
>>>>> Which command line parameters?
>>>>         ./ppc64-softmmu/qemu-system-ppc64 -machine pseries -m 2048  -serial stdio
>>> I've just tried the very same versions, with the very same parameters,
>>> and it is working nicely here! Very strange...
>>>>         VNC server running on `'
>>>> 	SLOF **********************************************************************
>>>> 	QEMU Starting
>>>> 	 Build Date = Sep  9 2015 16:30:15
>>>> 	 FW Version = git-811277ac91f674a9
>>>> 	 Press "s" to enter Open Firmware.
>>>> 	Cannot open file : fbuffer.fs
>>> Sounds like the boot_rom.bin / slof.bin maybe has not been built
>>> correctly ... what do you get when you run:
>>> strings boot_rom.bin | grep fbuffer.fs
>> $ strings boot_rom.bin | grep fbuffer.fs
>> include fbuffer.fs
>> 0fbuffer.fs
> That looks ok, fbuffer.fs is available in the romfs ...
> I've done some more tests, and it occurs for me, too, when I use the ATC
> compiler instead of my normal cross-compiler from my distro!

I am using:

$ powerpc64-linux-gnu-gcc --version
powerpc64-linux-gnu-gcc (GCC) 5.1.1 20150422 (Red Hat Cross 5.1.1-1)
