Ethereal-users: Re: [Ethereal-users] Three big problems

Note: This archive is from the project's previous web site, ethereal.com. This list is no longer active.

From: "Ronnie Sahlberg" <ronnie_sahlberg@xxxxxxxxxxxxxx>
Date: Tue, 5 Nov 2002 08:02:36 +1100
----- Original Message -----
From: "McNutt, Justin M."
Sent: Tuesday, November 05, 2002 2:56 AM
Subject: RE: [Ethereal-users] Three big problems


>Again, editcap only gives a single segment.  It does not break up the
>file into many arbitrary-sized or -length chunks.  See 'man split' for
>a text version of what I'm talking about.

1, Loop over the huge capture with editcap.
2, Don't create such huge captures in the first place.
3, Develop a new version of editcap that can do the kind of split you are
looking for.
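Option 1 might be scripted something like the sketch below, using editcap's
-r flag to keep only the listed packet range. The file name, chunk size and
chunk count are placeholders; the commands are echoed first so the ranges
can be checked before running them for real.

```shell
#!/bin/sh
# Carve huge.cap into fixed-size chunks with editcap -r (keep only
# the listed packet range). Dry run: commands are echoed, not run.
CHUNK=200000
start=1
i=0
while [ $i -lt 3 ]; do
    end=$((start + CHUNK - 1))
    echo "editcap -r huge.cap chunk$i.cap $start-$end"
    start=$((end + 1))
    i=$((i + 1))
done
```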

>> > > 3)  I need to be able to use at least 1000 files in the ring
>> > > buffer (although about 60,000 would be much better).  This
>> > > one is by far the most important, since if I can get past the
>> > > 10 file limitation I can worry about item 1) above and make
>> > > do, but with only 10 files in the ring buffer I'm screwed.
>>
>> That many files is not supported by ethereal, neither do I
>> think it will be.
>
>What is your basis for this assumption?

There are reasons why capturing for several days at a time may not be a
really good idea.
Even at reasonably slow rates such as 75Mbit/s, every packet adds to the
state buildup inside ethereal until memory is exhausted, i.e. ethereal will
become slower and slower as the system runs out of memory.

Stopping and restarting the capture is an efficient way to control the
amount of state buildup.

The state in question is what ethereal remembers internally about every
packet it sees: when the packet was received, its size, whether it is an
ONC-RPC packet (in which case the XID and a few other fields are kept),
and a whole bunch of other information that a stateful analyzer needs to
keep between packets.


>> > > The deal is that I need to run a perpetual packet capture on
>> > > a 75+ Mb link and I need to buffer to hold at least 3 days
>> > > worth of data.  I have the disk space and the server hardware
>> > > to do this, but I'm limited by Ethereal.
>>
>> I do these things from time to time in the lab when it might
>> take several
>> days of auto testing
>> to recreate a situation.
>> When I need to do this I usually implement it something like
>> #!/bin/sh
>> while true;do
>>     filename=`date +"%Y%m%d-%H%M%S"`
>>     tethereal -s 1500 -i eth0 -w $filename -c 200000
>>     gzip $filename &
>> done
>
>The time between the end of one capture and the beginning of the next -
>especially since you're compressing the last file before you begin the
>next capture - can be a serious problem at 75+ Mb.  We want the capture
>to be uninterrupted.

'&' means: run the command in the background.
Thus, the script will NOT wait until the previous capture is compressed
before it starts a new one.
There is only a very short gap between captures, in which a few packets
may be lost.
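A tiny demonstration of the point, with a two-second sleep standing in for
the gzip run:

```shell
#!/bin/sh
# The slow command (stand-in for gzip) runs in the background, so
# the next line is reached immediately, not after two seconds.
start=`date +%s`
( sleep 2; : "compression finished" ) &
next=`date +%s`
elapsed=$((next - start))
echo "reached the next capture after $elapsed second(s)"
wait    # only so the demo exits cleanly after the background job
```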

You may want to compress the captures even for 75Mbit/s links, since the
uncompressed data from such a link is ~9MByte/s -> ~33GByte/hour ->
~800GByte/day -> ~2.4TByte for a capture of just over 3 days.
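The arithmetic behind those figures, assuming a fully saturated 75Mbit/s
link and decimal units (1GByte = 10^9 bytes):

```shell
#!/bin/sh
# Back-of-the-envelope check of the storage rates quoted above.
bps=$((75 * 1000000 / 8))                # bytes/s, ~9.4 MByte/s
per_hour=$((bps * 3600 / 1000000000))    # GByte per hour
per_day=$((bps * 86400 / 1000000000))    # GByte per day
three_days=$((per_day * 3))              # GByte for 3 days
echo "$per_hour GByte/hour, $per_day GByte/day, $three_days GByte/3days"
```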



>> Or use snoop or tcpdump instead of tethereal.
>
>Do these apps have more flexible ring buffers (or something similar)?
>The reason we're using tethereal is because of this feature.  If a
>while() loop in some script were sufficient, we could use any packet
>capturing engine in the world.

No, they do not have ring buffers at all, but they are both less stateful,
so the state buildup is smaller than for tethereal.
Being less stateful, they can capture for longer than a more stateful tool
can.
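The rotation loop above works just as well with tcpdump substituted for
tethereal. A capped dry run of the idea (drop the echo and the iteration
cap, and use `date` filenames as in the tethereal script, to run it for
real; interface and packet count are examples):

```shell
#!/bin/sh
# Same rotation idea with tcpdump, which keeps almost no state
# between packets. Dry run: commands are echoed, not executed.
i=0
while [ $i -lt 3 ]; do
    filename=capture-$i
    echo "tcpdump -s 1500 -i eth0 -w $filename -c 200000"
    echo "gzip $filename &"
    i=$((i + 1))
done
```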


>Again, the modifications I need don't appear - to me anyway - to be that
>significant.  If I am wrong, what is the basis for the 10-file limit in
>the first place?  On the other issue, if t/ethereal can stop after NN
>seconds or MM frames, why can't it rotate the capture files based on the
>same criteria?

Probably an arbitrary limit that the developer of the ringbuffer function
picked as good enough for most situations.

You have the source, use it.
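Short of patching the source, a ring buffer of (almost) any size can also
be emulated outside ethereal: after each capture finishes, delete all but
the newest MAX files. In this sketch `touch` stands in for the real capture
command, and zero-padded names keep the sort order stable:

```shell
#!/bin/sh
# Emulate a MAX-file ring buffer by pruning old capture files.
MAX=5
i=0
while [ $i -lt 12 ]; do
    name=`printf 'capture-%03d' $i`
    touch $name                  # stand-in for one capture run
    # delete everything but the newest $MAX capture files
    for old in `ls capture-* | sort -r | tail -n +$((MAX + 1))`; do
        rm -f "$old"
    done
    i=$((i + 1))
done
kept=`ls capture-* | wc -l`
echo "$kept files kept"
```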