Smallest possible PBUF_POOL_SIZE and MEM_SIZE (UDP only), PBUF allocation


Smallest possible PBUF_POOL_SIZE and MEM_SIZE (UDP only), PBUF allocation

Hamme, Janis
Hello lwIP Users,

I'm currently configuring/porting lwIP for an embedded application with quite constrained memory size (NO_SYS mode).
Since I only need IPv6/UDP with multicast I was wondering if setting the statically allocated memory as low as

#define PBUF_POOL_SIZE            1
#define MEM_SIZE                        800

could cause any problems. Since each packet would be processed immediately I can't think of any situation where I would need to store more than one incoming and one outgoing packet at a time. Queueing shouldn't occur anywhere - or am I missing something? I've attached the relevant parts of my lwipopts.h at the bottom.

My other question is what would be the most efficient way to allocate PBUFs for outgoing packets. I'm using a statically allocated buffer to generate outgoing packets and I could easily increase the buffer size to hold all other headers in contiguous memory. But I must be sure that the buffer can be reused once udp_sendto(...) returns. Is there any way to achieve that or is the overhead of using PBUF_REF with chained header PBUFs negligible?
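To make the question concrete, here is roughly the pattern I have in mind. The lwIP types and udp_sendto() are stubbed out so the sketch stands alone (real code would use struct pbuf from "lwip/pbuf.h", pbuf_alloc(..., PBUF_REF) and udp_sendto() from "lwip/udp.h"); the point is just the lifetime rule: if the stack serializes the chain before returning, the static buffer is reusable afterwards.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-ins for the lwIP pieces so this sketch compiles alone. */
struct pbuf {
    struct pbuf *next;     /* chained pbufs, as with pbuf_chain() */
    void        *payload;
    uint16_t     len;
};

static uint8_t wire[1500]; /* pretend driver output frame */
static size_t  wire_len;

/* Stub for udp_sendto(): serializes the whole chain synchronously,
 * i.e. the "no queueing" case where the buffer is safe to reuse
 * as soon as the call returns. */
static int udp_sendto_stub(struct pbuf *p)
{
    wire_len = 0;
    for (; p != NULL; p = p->next) {
        memcpy(wire + wire_len, p->payload, p->len);
        wire_len += (size_t)p->len;
    }
    return 0; /* ERR_OK */
}

static uint8_t app_buf[64]; /* statically allocated payload buffer */

/* Header pbuf in contiguous memory, payload as a PBUF_REF-style
 * reference into app_buf (no copy). */
int send_once(const char *msg)
{
    static uint8_t hdr[8] = { 0xAA }; /* placeholder for real headers */
    size_t n = strlen(msg);

    memcpy(app_buf, msg, n);
    struct pbuf payload = { NULL,     app_buf, (uint16_t)n };
    struct pbuf header  = { &payload, hdr,     sizeof(hdr) };
    return udp_sendto_stub(&header);
}
```

Whether app_buf really is reusable right after the call is exactly my question: if the stack ever queues the pbuf internally, a PBUF_REF would still point into the buffer.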

Regards
Janis Hamme

lwipopts.h enabled features:

#define LWIP_IPV4 0
#define LWIP_IPV6 1
#define LWIP_RAW 0
#define LWIP_DNS 0
#define LWIP_TCP 0
#define LWIP_UDP 1

#define LWIP_ICMP6 1
#define LWIP_IPV6_MLD 1
#define LWIP_IPV6_REASS 0
#define LWIP_IPV6_FRAG 0
#define LWIP_IPV6_SEND_ROUTER_SOLICIT 0
#define LWIP_ND6_ALLOW_RA_UPDATES 0
#define LWIP_ND6_QUEUEING 0

#define LWIP_NETCONN 0
#define LWIP_SOCKET 0

_______________________________________________
lwip-users mailing list
[hidden email]
https://lists.nongnu.org/mailman/listinfo/lwip-users

Re: Smallest possible PBUF_POOL_SIZE and MEM_SIZE (UDP only), PBUF allocation

goldsimon@gmx.de


"Hamme, Janis" wrote:
> Since each packet would be processed
>immediately I can't think of any situation where I would need to store
>more than one incoming and one outgoing packet at a time. Queueing
>shouldn't occur anywhere - or am I missing something?

If your netif driver doesn't queue, tx packets can still be queued for a short time if the MAC address for the next hop needs to be resolved. For multicast, this should not be required, so your assumption might hold.


>My other question is what would be the most efficient way to allocate
>PBUFs for outgoing packets. I'm using a statically allocated buffer to
>generate outgoing packets and I could easily increase the buffer size
>to hold all other headers in contiguous memory. But I must be sure that
>the buffer can be reused once udp_sendto(...) returns. Is there any way
>to achieve that or is the overhead of using PBUF_REF with chained
>header PBUFs negligible?

That would involve some work, I think. Using a PBUF_RAM should be the simplest solution, I guess? In your simple setup you could also use a PBUF_POOL for tx pbufs to get rid of the heap...
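For the pool-only route, the sizing in lwipopts.h might look something like this (numbers are illustrative, not a recommendation; PBUF_POOL_BUFSIZE has to cover the link headers plus your largest datagram, e.g. the IPv6 minimum MTU of 1280):

```c
/* Illustrative only - tune for your MTU and traffic pattern. */
#define PBUF_POOL_SIZE     2    /* e.g. one rx + one tx buffer in flight */
#define PBUF_POOL_BUFSIZE  (PBUF_LINK_ENCAPSULATION_HLEN + PBUF_LINK_HLEN + 1280)
```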


Simon


Re: Smallest possible PBUF_POOL_SIZE and MEM_SIZE (UDP only), PBUF allocation

Hamme, Janis
Hello Simon,

Thanks for your response! I just checked nd6.c and it seems I misunderstood how the LWIP_ND6_QUEUEING option works: I thought disabling it would disable queueing entirely, not just limit it to a single packet per neighbor. An option to simply broadcast packets on the link layer while the cache entry is still in INCOMPLETE state would be neat for future lwIP versions (although that isn't standards-conformant). I also thought the neighbor cache would be populated by looking at incoming packets, but indeed the RFC says it isn't.
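For reference, these are the knobs in opt.h as I now read them (my interpretation, worth double-checking against the comments in opt.h itself):

```c
#define LWIP_ND6_QUEUEING   0   /* 0 still queues ONE packet per neighbor;
                                   it only disables the multi-packet queue */
#define MEMP_NUM_ND6_QUEUE  20  /* pool backing queued ND6 packets when
                                   queueing is enabled (opt.h default) */
```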

The network protocol I'm using is not limited to multicast: nodes reply with unicast packets after a multicast request has been received. Queueing just a single packet might not be enough, so maybe I'll have to patch in a "broadcast if unknown" feature.

I guess I'll just use PBUF_REF for outgoing packets. Fixed-size headers shouldn't cause too much heap fragmentation. Getting rid of the heap entirely probably won't work anyway, because it's still needed for sending all kinds of ICMPv6/ND packets.

Janis

-----Original Message-----
From: lwip-users <lwip-users-bounces+jhamme=[hidden email]> On Behalf Of goldsimon
Sent: Friday, October 19, 2018 2:14 PM
To: Mailing list for lwIP users <[hidden email]>
Subject: Re: [lwip-users] Smallest possible PBUF_POOL_SIZE and MEM_SIZE (UDP only), PBUF allocation



