Using lwIP in Xilinx Gigabit System Reference Design (GSRD)


Using lwIP in Xilinx Gigabit System Reference Design (GSRD)

llandre
Hi all,

I tested the GSRD project on a gigabit point-to-point link (PC and ML403
board) and achieved excellent performance. Since I can accept lower
performance, I'm wondering whether it is possible to replace the Treck
TCP/IP stack with lwIP. What do you think?
I think the most challenging issue is reducing the memory requirements
as much as possible so that all code and data stay in the processor
cache. IIUC this is the key trick - along with hardware CRC computation
- that allows the software stack to handle such high data throughput.
Any comments or suggestions will be appreciated.


TIA,
llandre

DAVE Electronics System House - R&D Department
web:   http://www.dave-tech.it
email: r&[hidden email]



Re: Using lwIP in Xilinx Gigabit System Reference Design (GSRD)

Kieran Mansley
On Fri, 2006-10-20 at 18:35 +0200, llandre wrote:
> Any comments or suggestions will be appreciated.

Try the lwip-users mailing list. People there are much more likely to
have tried something similar.

Thanks

Kieran





Re: Using lwIP in Xilinx Gigabit System Reference Design (GSRD)

Sathya Thammanur
In reply to this post by llandre
Hi llandre,
Technically I don't see anything that would prevent such an effort from succeeding. The key thing is to write a TEMAC adapter that works with the lwIP stack. Since the new version of lwIP does allow for hardware checksum computation, it should not be an issue to get the design working. If you have the Treck adapter code, you can model the lwIP solution along the same lines. Note that in order to get the Sockets API working you will need kernel support, and hence you might have to use the Xilkernel that ships with EDK.
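
To give a rough idea, the adapter boils down to an lwIP netif driver: an init function that fills in struct netif and a low-level output routine that hands pbufs to the MAC. A minimal sketch could look like the following (the temac_* calls are just placeholders for whatever the real TEMAC driver primitives turn out to be, and flag/field names may differ slightly between lwIP versions):

#include "lwip/netif.h"
#include "lwip/pbuf.h"
#include "lwip/err.h"
#include "netif/etharp.h"

/* Placeholder TEMAC driver primitives - substitute the real ones. */
extern void temac_hw_init(void);
extern void temac_send(const void *buf, unsigned len);

/* lwIP calls this (via netif->linkoutput) to transmit one Ethernet frame. */
static err_t temac_low_level_output(struct netif *netif, struct pbuf *p)
{
  struct pbuf *q;

  for (q = p; q != NULL; q = q->next) {
    /* Each pbuf in the chain carries part of the frame. */
    temac_send(q->payload, q->len);
  }
  return ERR_OK;
}

/* Passed to netif_add(); sets up the interface and the hardware. */
err_t temac_if_init(struct netif *netif)
{
  netif->name[0] = 't';
  netif->name[1] = 'e';
  netif->output     = etharp_output;            /* ARP + IP output      */
  netif->linkoutput = temac_low_level_output;   /* raw frame output     */
  netif->mtu        = 1500;
  netif->hwaddr_len = 6;
  netif->flags      = NETIF_FLAG_BROADCAST;

  temac_hw_init();
  return ERR_OK;
}

The receive side would simply wrap each incoming frame in a pbuf and hand it to lwIP's ethernet input function, and netif_add() ties the whole thing together.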

Alternatively, if you use the RAW API (simpler to port than Sockets), the port should be relatively simple.

I would be interested in knowing about your efforts if you are able to get this working.

Sathya




Re: Using lwIP in Xilinx Gigabit System Reference Design (GSRD)

llandre
Hi Sathya,

> Technically I don't see anything that would prevent such an effort from
> succeeding. The key thing is to write a TEMAC adapter that works with the
> lwIP stack.
I agree with you if we just want a port that works. However, if we need
to get the best achievable performance - which is my case - I think
things are more complicated. IIUC, in fact, the GSRD project exploits
all the BRAM memory to extend the PPC cache (16kB+16kB+32kB=64kB). This
way all the stack code and data fit in the cache, which allows the
processor to execute all the TCP/IP code at maximum speed and thus to
handle an 800 Mb/s TCP connection over a gigabit link. So my question is:
is it possible to reduce the memory footprint required by lwIP in order
to make it fit in 64kB?
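
If it helps to be concrete, I would start from a deliberately small lwipopts.h along these lines (option names as in the current lwIP tree; the numbers are just first guesses on my side, not validated on the GSRD hardware):

#define NO_SYS                  1            /* RAW API only, no OS/threads  */
#define LWIP_SOCKET             0            /* no BSD sockets layer         */
#define LWIP_NETCONN            0            /* no sequential (netconn) API  */

#define MEM_SIZE                (16 * 1024)  /* heap for outgoing pbufs      */
#define PBUF_POOL_SIZE          8
#define PBUF_POOL_BUFSIZE       1536

#define TCP_MSS                 1460
#define TCP_SND_BUF             (4 * TCP_MSS)
#define TCP_WND                 (4 * TCP_MSS)

#define MEMP_NUM_TCP_PCB        4
#define MEMP_NUM_TCP_SEG        16

The obvious tension is that a small TCP_WND and TCP_SND_BUF also cap the achievable throughput, so hitting both the 64kB target and a high data rate will probably take some iteration on these numbers.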

> Since the new version of lwIP does allow for hardware checksum
> computation, it should not be an issue to get the design working.
I agree.
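
For reference, these seem to be the relevant options in lwIP's opt.h; setting the generation/checking macros to 0 should leave the work to the TEMAC offload engine (names taken from the current lwIP tree, to be double-checked against the version actually used):

#define CHECKSUM_GEN_IP     0   /* don't compute outgoing IP checksums  */
#define CHECKSUM_GEN_TCP    0   /* don't compute outgoing TCP checksums */
#define CHECKSUM_CHECK_IP   0   /* don't verify incoming IP checksums   */
#define CHECKSUM_CHECK_TCP  0   /* don't verify incoming TCP checksums  */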

> If you
> have the Treck adapter code, you can model the lwIP solution along the
> same lines. Note that in order to get the Sockets API working you will
> need kernel support, and hence you might have to use the
> Xilkernel that ships with EDK.
>
> Alternatively, if you use the RAW API (simpler to port than Sockets),
> the port should be relatively simple.
As I said before, I need to reduce the footprint as much as possible,
so I'm going to use the RAW API.
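
For the record, the RAW API side looks compact enough; a minimal TCP listener sketch (port number and callback names are arbitrary, error handling omitted):

#include "lwip/tcp.h"
#include "lwip/pbuf.h"
#include "lwip/err.h"

/* Receive callback: echo the data back and release the pbuf. */
static err_t echo_recv(void *arg, struct tcp_pcb *pcb, struct pbuf *p, err_t err)
{
  struct pbuf *q;

  if (p == NULL) {              /* remote side closed the connection */
    tcp_close(pcb);
    return ERR_OK;
  }
  for (q = p; q != NULL; q = q->next) {
    tcp_write(pcb, q->payload, q->len, 1 /* copy */);
  }
  tcp_recved(pcb, p->tot_len);  /* re-open the receive window */
  pbuf_free(p);
  return ERR_OK;
}

/* Accept callback: register the receive handler for the new connection. */
static err_t echo_accept(void *arg, struct tcp_pcb *newpcb, err_t err)
{
  tcp_recv(newpcb, echo_recv);
  return ERR_OK;
}

void echo_init(void)
{
  struct tcp_pcb *pcb = tcp_new();

  tcp_bind(pcb, IP_ADDR_ANY, 7);    /* arbitrary example port */
  pcb = tcp_listen(pcb);
  tcp_accept(pcb, echo_accept);
}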



--
llandre

DAVE Electronics System House - R&D Department
web:   http://www.dave-tech.it
email: r&[hidden email]

