managing storage problems.

managing storage problems.

shai katzir
Hello,

I have a problem with releasing packets when a receive mbox gets full.
In my case, I have a bound UDP socket that never receives, while a remote computer keeps
sending packets to it, so the receive mbox of that socket fills up quickly.
But lwIP does nothing to deal with this: when it does mbox_post it doesn't check whether the mbox is full; it just assumes the packet will be fetched, and freed, later.
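For illustration, a minimal netconn-API sketch of the situation (the names and the port number are just an example):

    /* A bound UDP netconn that is never read from (lwIP 1.2.x netconn API). */
    struct netconn *conn = netconn_new(NETCONN_UDP);
    netconn_bind(conn, IP_ADDR_ANY, 5000);   /* 5000 is an arbitrary example port */
    /* The application never calls netconn_recv(conn), so every datagram the peer
       sends is posted to conn->recvmbox by the stack and is never fetched or freed. */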

What can I do to free those packets once the mbox is full?

thanks,
shai

RE : managing storage problems.

Frédéric BERNON
This is a lwIP problem: due to the lwIP design, you should never get a full mbox. In practice, when a mbox does get full, it is either the netconn::recvmbox or the tcpip.c::mbox (the other mailboxes are used like "semaphores").

I will assume you use the latest CVS HEAD. Since sys_mbox_post can't return an error, the workaround (without changing lwIP code) would be to do the "free" inside your sys_mbox_post implementation. The main problem is knowing what kind of "free" to do:

- tcpip.c::mbox only contains pbufs
- netconn::recvmbox can contain netbufs for UDP/RAW netconns, and pbufs for TCP.

If you know the full mbox is always a UDP one, I think you can directly call netbuf_delete on the pointer you were about to post. It's not a clean solution, but it should work.
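A minimal sketch of that workaround inside a port's sys_mbox_post(), assuming the mbox in question only ever carries UDP/RAW netbufs; os_queue_trysend() and OS_OK are placeholders for whatever non-blocking post your RTOS provides, not real lwIP or OS names:

    #include "lwip/sys.h"
    #include "lwip/api.h"     /* struct netbuf, netbuf_delete() */

    void
    sys_mbox_post(sys_mbox_t mbox, void *msg)
    {
      /* Try a non-blocking post first (hypothetical RTOS call). */
      if (os_queue_trysend(mbox, &msg) != OS_OK) {
        /* The mbox is full. If we know it is a UDP/RAW recvmbox, the message
           is a struct netbuf *, so drop it here instead of blocking or leaking. */
        netbuf_delete((struct netbuf *)msg);
      }
    }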
 

Re: RE : managing storage problems.

goldsimon@gmx.de
Frédéric BERNON wrote:
> This is a lwIP problem: due to the lwIP design, you should never get a full mbox. In practice, when a mbox does get full, it is either the netconn::recvmbox or the tcpip.c::mbox (the other mailboxes are used like "semaphores").
>
But I think that's a problem we could solve. As I understand it, it's
not a problem for TCP because the recvmbox of TCP will never hold more
data than the TCP receive window (it could still be a problem if data
comes in many small packets -> many pbufs in the recvmbox, but if data
comes in small packets, the application presumably has time to process it).

For UDP, this can be solved much like the TCP send side, by having two limits:
a) the number of bytes in the recvmbox (like tcp_pcb->snd_buf), and
b) the number of pbufs in the recvmbox (like tcp_pcb->snd_queuelen).

I think this is a common problem for UDP stacks (other stacks implement
SO_RCVBUF, for example), and it should be implemented in lwIP too.
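As a hedged sketch (field names such as recv_bufsize are illustrative, not part of the 1.2.0 structures), the UDP check could look roughly like this inside recv_udp() in api_msg.c:

    /* Before wrapping the pbuf in a netbuf and posting it to conn->recvmbox:
       conn->recv_avail counts bytes already queued; recv_bufsize is an assumed
       per-netconn limit in the spirit of SO_RCVBUF. */
    if (conn->recv_avail + p->tot_len > conn->recv_bufsize) {
      pbuf_free(p);    /* receiver is not keeping up: drop the datagram */
      return;
    }
    conn->recv_avail += p->tot_len;
    /* ...allocate the netbuf and sys_mbox_post() it as before... */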

The tcpip mbox is a different matter, because it is very specific to lwIP.
I have code that stops filling the tcpip mbox when it is too full (which
also disables my RX interrupt). I then get informed by the tcpip_thread
when there is enough space again, so I can turn RX interrupts back on.
This isn't very portable, but maybe just informing the app/driver about a
high/low water mark could be a more general solution (to avoid having
to modify the core code for this purpose).
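Purely for illustration, such driver-side throttling might look like the sketch below; everything except tcpip_input() (tcpip_mbox_count(), eth_rx_irq_enable(), the watermarks) is a placeholder for port-specific code that is not in the stock stack:

    #include "lwip/err.h"
    #include "lwip/pbuf.h"
    #include "lwip/netif.h"
    #include "lwip/tcpip.h"

    #define RX_HIGH_WATER  40   /* stop feeding tcpip_thread above this backlog */
    #define RX_LOW_WATER   10   /* resume once the backlog drains below this    */

    /* Called from the Ethernet driver's RX path. */
    void eth_rx_packet(struct pbuf *p, struct netif *netif)
    {
      if (tcpip_mbox_count() >= RX_HIGH_WATER) {  /* hypothetical port helper */
        eth_rx_irq_enable(0);                     /* mask RX IRQ; hardware queues/drops */
      }
      if (tcpip_input(p, netif) != ERR_OK) {      /* normal hand-off to tcpip_thread */
        pbuf_free(p);                             /* posting failed: don't leak the pbuf */
      }
    }

    /* Called back from tcpip_thread once it has drained enough messages. */
    void eth_rx_resume(void)
    {
      if (tcpip_mbox_count() < RX_LOW_WATER) {
        eth_rx_irq_enable(1);                     /* unmask RX IRQ again */
      }
    }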

Anyone else needing this except me and shai?

> If you know the full mbox is always a UDP one, I think you can directly call netbuf_delete on the pointer you were about to post. It's not a clean solution, but it should work.
>
That's really not a clean solution since memp_free() doesn't check
whether the pointer to be freed is a pool element (memory region is not
checked), which could lead to bugs that are hard to locate if the tcpip
mbox really gets full some time.

Simon



RE : RE : managing storage problems.

Frédéric BERNON
I agree, we should solve that in the stack, and SO_RCVBUF is a good solution for me (before or after 1.3.0? To be honest, I'm not sure). I'm also in flavor of redesigning the mailbox model: adding a "size" parameter to sys_mbox_new (for example, the netconn::mbox doesn't really need to be larger than 1; of course, a semaphore could be better), and adding return values to sys_mbox_post (to be able to handle the full-mbox case).

The main "problem" is sharing a "byte counter" between the application thread and tcpip_thread. I think it would be better to include it directly in the mailbox model (a sys_mbox_getcount, and something like a "bytesize" parameter in sys_mbox_post), to avoid an execution context switch like for TCP (with "do_recv" calls), and an unprotected access to netconn::recv_avail for UDP and RAW (I don't think the current code is very thread-safe).

Since it's something that could take time to define, I think shai could test the workaround I proposed to him in the meantime.
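As a rough illustration, the redesigned sys_arch interface being discussed might read like this (these prototypes are the proposal, not the current 1.2.0/CVS API):

    /* sys.h, hypothetical revision */
    sys_mbox_t sys_mbox_new(u16_t size);                  /* mbox holds at most 'size' messages    */
    err_t      sys_mbox_post(sys_mbox_t mbox, void *msg,
                             u16_t bytesize);             /* returns ERR_MEM when the mbox is full */
    u16_t      sys_mbox_getcount(sys_mbox_t mbox);        /* messages currently queued             */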

 
 

Re: RE : managing storage problems.

shai katzir

I'm using the 1.2.0 stable version, and I would prefer to stay with it, at least until 1.3.0 is out.

Currently I'm using UDP only, but I intend to use TCP or maybe other features in the future;
therefore, freeing the packet within sys_mbox_post won't work for me.

In this case, I'm afraid I'll have to make slight changes to the lwIP I'm using.
Changing the mbox (as you mentioned) to return errors is quite heavy and may be risky for me
to add to the stack.
I'm thinking about changing the operational logic of the UDP path by not allowing any packets to
be received (into the recvmbox) if the socket is not trying to receive (via recvfrom).
For this I need to know two things:
1. Where can this problem occur apart from the UDP recvmbox? (You mentioned that it can't happen
   for TCP, but can happen in the tcpip requests mbox?)
2. Where should I drop the packet, and how can I check whether the socket is currently receiving?

As for Simon's suggestion:
>For UDP, this can be solved much like the TCP send side, by having two limits:
>a) the number of bytes in the recvmbox (like tcp_pcb->snd_buf), and
>b) the number of pbufs in the recvmbox (like tcp_pcb->snd_queuelen).
Do you mean adding the current state of the recvmbox (total length and current length) to the pcb as additional attributes?
I think that if you want the stack to be able to get this information (the mbox state), it should be
an integral part of the sys_mbox, as BERNON mentioned (adding sys_mbox_len or sys_mbox_is_full).

shai

Re: RE : RE : managing storage problems.

Jonathan Larmour
Frédéric BERNON wrote:
> I agree, we should solve that in the stack, and SO_RCVBUF is a good
> solution for me (before or after 1.3.0? To be honest, I'm not sure). I'm
> also in flavor of redesigning the mailbox model: adding a "size" parameter
> to sys_mbox_new (for example, the netconn::mbox doesn't really need to be
> larger than 1; of course, a semaphore could be better),

Be warned that some OSes have fixed mailbox sizes. Any design would need
to allow for that. Embedded systems favour static objects with deterministic
access. Dynamic sizing loses that.

For example, eCos has both sorts so it _can_ be done, but the variable size
mailbox implementation is bigger + slower + non-deterministic.

> and adding return values to sys_mbox_post (to be able to handle the full
> mbox case).

That's definitely the way forward, and I would have thought it would even be
the best workaround for Shai - not doing everything and checking errors for
every sys_mbox_post call, but just for the calls he cares about (i.e. in
recv_udp). I don't think it would be too difficult to do just that.

Or an alternative workaround is to add a sys_arch_is_mbox_full() function
in the sys_arch part of his port and call that to check before doing the
sys_mbox_post in recv_udp. Even less clean, but perhaps less code to change.
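A sketch of that second workaround, assuming lwIP 1.2.x's recv_udp() in api_msg.c and a port-supplied sys_arch_is_mbox_full() (not part of the standard sys_arch API):

    /* In the port (sys_arch.c): report whether the mbox has no free slot left. */
    int sys_arch_is_mbox_full(sys_mbox_t mbox);

    /* In recv_udp(), before building the netbuf and posting it: */
    if (sys_arch_is_mbox_full(conn->recvmbox)) {
      pbuf_free(p);    /* nobody is draining the recvmbox: drop the datagram */
      return;
    }
    /* ...otherwise allocate the netbuf and sys_mbox_post() as usual... */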

> The main "problem" is sharing a "byte counter" between the application
> thread and tcpip_thread. I think it would be better to include it directly
> in the mailbox model (a sys_mbox_getcount, and something like a "bytesize"
> parameter in sys_mbox_post),

Not every mailbox needs that. I would have thought that, where needed, it
could just be kept alongside the mailbox but separate (i.e. in the struct
netconn).

> Since it's something that could take time to define,

Yes, after 1.3.0 I think, and I can't think of any suitable workarounds
before 1.3.0.

Jifl

RE : RE : RE : managing storage problems.

Frédéric BERNON
>Be warned that some OSes have fixed mailbox sizes. Any design would need to  allow for that. Embedded systems favour static objects with deterministic access. Dynamic sizing loses that. For example, eCos has both sorts so it _can_ be done, but the variable size mailbox implementation is bigger + slower + non-deterministic.

About mailboxes, I see three kinds (for me, "dynamic" and "variable" are different things):
1/ fixed size (you don't provide any size when you create them; the max size is defined by build options)
2/ dynamic size (you can provide a max size when you create them)
3/ variable size (there is no real max size; it's a kind of list)

What I suggested is 2/ (and not 3/). Having a netconn::mbox with the same size as netconn::recvmbox doesn't seem useful. I think that adding a size parameter to sys_mbox_new is a good solution (like we added the stacksize to sys_thread_new). If your OS doesn't support that, you don't have to use it. Since there are not a lot of sys_mbox_new calls (I count 8 in CVS head), and since these calls are not on critical paths (send/recv calls), I stay in flavor of changing it (after 1.3.0).
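For example, a port whose OS only offers fixed-depth mailboxes could simply ignore or clamp the requested size (os_mbox_create() and SYS_MBOX_SIZE are placeholders here):

    sys_mbox_t
    sys_mbox_new(u16_t size)
    {
      (void)size;   /* this OS only supports one fixed queue depth; treat 'size' as a hint */
      return os_mbox_create(SYS_MBOX_SIZE, sizeof(void *));
    }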

>That's definitely the way forward, and I would have thought it would even be the best workaround for Shai - not doing everything and checking errors for every sys_mbox_post call, but just for the calls he cares about (i.e. in recv_udp). I don't think it would be too difficult to do just that.

Yes, but choosing the limit with an option, as proposed in "[task #7421] Implement SO_RCVBUF", is also fast to implement (since we already have netconn::recv_avail, which counts the bytes in the netconn::recvmbox, even if it's not very well protected).

>Not every mailbox needs that. I would have thought that, where needed, it could just be kept alongside the mailbox but separate (i.e. in the struct netconn).

That's what is already done for netconn::recv_avail and netconn::recvmbox.

 

Re: RE : RE : RE : managing storage problems.

Jonathan Larmour
Frédéric BERNON wrote:

>> Be warned that some OSes have fixed mailbox sizes. Any design would
>> need to allow for that. Embedded systems favour static objects with
>> deterministic access. Dynamic sizing loses that. For example, eCos has
>> both sorts so it _can_ be done, but the variable size mailbox
>> implementation is bigger + slower + non-deterministic.
>
> [snip]
> Having a netconn::mbox with
> the same size as netconn::recvmbox doesn't seem useful. I think that
> adding a size parameter to sys_mbox_new is a good solution (like we added
> the stacksize to sys_thread_new). If your OS doesn't support that,
> you don't have to use it. Since there are not a lot of sys_mbox_new calls (I
> count 8 in CVS head), and since these calls are not on critical paths
> (send/recv calls), I stay in flavor of changing it (after 1.3.0).

Don't get me wrong - I agree. I was just saying that whatever the solution
is, it cannot absolutely rely on requesting a mailbox of size X really
giving you a mailbox of size exactly X.

(By the way, you mean favor, not flavor, which is quite a different word :-)).

Jifl

RE : RE : RE : RE : managing storage problems.

Frédéric BERNON
>Don't get me wrong - I agree. I was just saying that whatever the solution is, it cannot absolutely rely on requesting a mailbox of size X really giving you a mailbox of size exactly X.

Sure, so we agree.

>(By the way, you mean favor, not flavor, which is quite a different word :-)).

Eh eh :)
 

Re: RE : RE : RE : RE : managing storage problems.

shai katzir

Finally, I decided to give the sys_mbox_post function a success/failure return value as discussed, and it worked just fine.
Still, I only added checks for the UDP and RAW receive paths. The problem is that when the post fails,
there has to be some error handling and some undoing of earlier actions before giving up on posting.
For example, in the TCP receive path, before posting the packet to the recvmbox, there are updates to the pcb and "tcp_update"; these updates should be undone. I wasn't sure how to undo them, so I left the failed-post check out of the TCP case.
Moreover, in the UDP receive path, before posting the packet to the recvmbox there is a call to the conn callback; whatever the callback did should be reversed too, but again, I didn't know what changes had been made.
Can I move the callbacks to after a successful post, or is there a problem doing that in a multithreaded system (where the fetch could happen before the callback executes)?

shai

Re: RE : RE : RE : RE : managing storage problems.

Jonathan Larmour

Yes, the fetch could be done before the callback.

For UDP, there should be no harm in your workaround making the callbacks
after the post, but everything else should stay before the post,
and be undone afterwards if the post failed.
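A hedged sketch of that ordering for recv_udp(), assuming shai's modified error-returning post (called sys_mbox_trypost here for illustration) and lwIP 1.2.x-style netconn fields:

    u16_t len = p->tot_len;     /* capture before posting: the fetcher may free the pbuf right away */
    conn->recv_avail += len;
    if (sys_mbox_trypost(conn->recvmbox, buf) != ERR_OK) {
      conn->recv_avail -= len;  /* undo the bookkeeping done before the post */
      netbuf_delete(buf);       /* frees the netbuf and the pbuf chain inside it */
    } else if (conn->callback != NULL) {
      /* Signal the application only once the datagram is really queued. */
      (*conn->callback)(conn, NETCONN_EVT_RCVPLUS, len);
    }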

For TCP, I don't see any code which updates the PCB or any mention of
"tcp_update" in the whole stack. Where did you get your sources from?

Jifl

RE: RE : RE : RE : managing storage problems.

olivier.hachet2
Hello all,
 
Could you please unregister me from the lwip mailing list.
Thanks a lot

Regards,

Olivier


AW: RE : RE : RE : managing storage problems.

Goldschmidt Simon
> Hello all,
>

Hello Olivier,

> Could you please unregister me from the lwip mailing list.
> Thanks a lot

Could you please unregister yourself from the lwip mailing list:
http://lists.nongnu.org/mailman/listinfo/lwip-users

Regards,
Simon
:-)

>
> Regards,
>
> Olivier

