Virtex 4 Avnet Mini-Module Networking.

Mohammad Sadegh Sadri mamsadegh at hotmail.com
Fri Jun 22 02:05:09 EST 2007



Very interesting.

So the provided XPS project from XAPP941 gives 230 Mbit/s, but your custom system gives only about 83.8 Mbit/s, yes?

Just one thing: are you sure the PPC core has the same clock frequency in both designs? I mean, maybe the XAPP941 design runs the PowerPC at 300 MHz while yours runs at 100 MHz.
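
By the way, a quick way to check this on the running boards, assuming a normal PPC Linux kernel (the exact /proc/cpuinfo field layout varies a little between kernel versions), is:

    # cat /proc/cpuinfo
    ...
    clock           : 300.000000MHz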

And what about a custom system built with 9.1 SP2? Does that give the same performance?

Finally, if possible, please post the test results for TCP streams as well (9.1 no SP and 9.1 SP2).
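
For example, the TCP counterpart of the UDP run quoted below would be something like this (just a sketch that mirrors your UDP options; I have not run it myself):

    ./netperf -l 30 -H 192.168.119.1 -i 10,2 -I 99,10 -t TCP_STREAM -- -m 32768 -s 262144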

thanks



----------------------------------------
> Date: Thu, 21 Jun 2007 18:53:49 +0400
> From: akonovalov at ru.mvista.com
> To: mamsadegh at hotmail.com
> CC: linuxppc-embedded at ozlabs.org; kashiwagi at co-nss.co.jp; glenn.g.hart at us.westinghouse.com
> Subject: Re: Virtex 4 Avnet Mini-Module Networking.
> 
> Mohammad Sadegh Sadri wrote:
> > I'm wondering where the original developers of the TEMAC driver are these days; I have not seen any posts from either Andrei
> > or Grant in recent weeks. Maybe they are involved in other projects.
> 
> As for me - no, not until the end of this week.
> 
> I am puzzled with the following:
> 
> I've got ML405 recently. There is a prebuilt bitstream with TEMAC in SGDMA mode for this board made by Xilinx (XAPP941).
> With the TEMAC driver from EDK 9.1SP2 and the XAPP941 bitstream I've got (no jumbo frames):
> 
>    XTemac: using sgDMA mode.
>    XTemac: using TxDRE mode
>    XTemac: using RxDRE mode
>    XTemac: buffer descriptor size: 32768 (0x8000)
>    XTemac: (buffer_descriptor_init) phy: 0x3cf8000, virt: 0xff100000, size: 0x8000
>    eth%d: XTemac: PHY detected at address 7.
>    eth0: Xilinx TEMAC #0 at 0x80000000 mapped to 0xC5060000, irq=0
>    eth0: XTemac id 1.0f, block id 5, type 8
> 
>    eth0: XTemac: Options: 0xb8f2
>    eth0: XTemac: We renegotiated the speed to: 1000
>    eth0: XTemac: speed set to 1000Mb/s
>    eth0: XTemac: Send Threshold = 16, Receive Threshold = 2
>    eth0: XTemac: Send Wait bound = 1, Receive Wait bound = 1
> 
>    root@192.168.119.11:~/netperf-2.4.1/src# ./netperf -l 30 -H 192.168.119.1 -i 10,2 -I 99,10 -t UDP_STREAM -- -m 32768 -s 262144
>    UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.119.1 (192.168.119.1) port 0 AF_INET : ±5.0% @ 99% conf.
>    Socket  Message  Elapsed      Messages
>    Size    Size     Time         Okay Errors   Throughput
>    bytes   bytes    secs            #      #   10^6bits/sec
> 
>    217088   32768   30.00       26391      0     230.60
>    107520           30.00       26377            230.48
> 
> Not that bad for this setup.
> 
> With XPS (EDK 9.1 without any SPs) I've tried creating exactly the same design,
> so that exactly the same kernel etc. could be used for both bitstreams.
> With the self-made bitstream I see exactly the same output, with the only difference being:
> 
>    root@192.168.119.11:~/netperf-2.4.1/src# ./netperf -l 30 -H 192.168.119.1 -i 10,2 -I 99,10 -t UDP_STREAM -- -m 32768 -s 262144
>    UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.119.1 (192.168.119.1) port 0 AF_INET : ±5.0% @ 99% conf.
>    Socket  Message  Elapsed      Messages
>    Size    Size     Time         Okay Errors   Throughput
>    bytes   bytes    secs            #      #   10^6bits/sec
> 
>    217088   32768   30.00        9593      0      83.82
>    107520           30.00        9586             83.76
> 
> :o(
> 
> Rebuilding the XAPP941 project with EDK 9.1.00 still gives 230 Mb/sec.
> I also pulled the additional timing constraints from XAPP941's data/system.ucf
> into the self-made data/system.ucf, but this didn't change anything.
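> For reference, the kind of constraint I mean is a PERIOD spec on the TEMAC/GMII
> clocks, of this general shape (illustrative net names only, not the exact
> XAPP941 lines):
> 
>    # hypothetical net name, for illustration only
>    NET "gmii_rx_clk" TNM_NET = "gmii_rx_clk";
>    TIMESPEC "TS_gmii_rx_clk" = PERIOD "gmii_rx_clk" 8 ns HIGH 50 %;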
> 
> I've run out of ideas and switched from TEMAC to other Xilinx work for a while.
> 
> 
> Thanks,
> Andrei


