Sadin Nurkic wrote:
> As raised by me a few days ago, and after re-reading this thread I'm
> almost certain there have been new leaks introduced in 0.10.13 and
> 0.10.14.
> There may be more reasons why memory is consumed, I don't really know.
> Could you clarify a little bit on how exactly the memory usage grows
> for short lived TCP sessions? Shouldn't the memory clear once the TCP
> session has cleared?
Well, how do you know when a TCP session is really finished? How do you
know that a given packet was really the last one? More packets
(e.g. retransmitted packets) may rush in later, even though the stream is
already "logically" closed.
> I do understand that ethereal needs to keep data
> in memory naturally, but I do not understand why so much memory is
> used up so quickly on such a small stream. Especially as all I'm
> trying to do is filter and write to a file. (as explained before I
> must use the read filter as I filter on a field deep within the
> tunneled packet).
"As all I'm trying to do is filter (using a field deep within the
tunneled packet) ..." that's an interesting sentence in itself ;-)
Please keep in mind that lots of dissectors are improving in
functionality, but they need more memory for that functionality.
> Where can I find some info on how to profile the memory usage as I'm
> certain that the behaviour has significantly changed in 0.10.13+?
I'm not trying to say that we don't have any memory leaks (how would
one know?). However, people often report that Ethereal "has memory
leaks" because it uses up all available memory if you just let it run
long enough, when that is actually caused by required memory
consumption, so we're getting a bit "numb" on that ear :-(
It's impossible to say what causes the memory consumption in your case,
and whether it's a leak or required behaviour, without further
investigation.
If I remember correctly, some people have used valgrind to detect
memory leaks. I'm really not an expert on this; maybe someone else can
give you advice on how to continue your research here ...
Regards, ULFL