Ethereal-users: RE: [Ethereal-users] Three big problems

From: "McNutt, Justin M." <McNuttJ@xxxxxxxxxxxx>
Date: Mon, 4 Nov 2002 09:56:15 -0600
> > > 1)  I have a 2GB capture file that I need to split.  I don't
> > > particularly care if it's split into chunks of NN packets or
> > > files of some size, but I certainly can't analyze the file as
> > > it is.  Second best would be a suggestion for an algorithm I
> > > could implement in Perl that would allow me to use editcap to
> > > split the file without knowing how many packets are in the
> > > file.  (e.g.  "while <some test>, editcap -r infile
> > > next.outfile <next chunk>").
> 
> Try editcap, it might do what you want.

Again, editcap only extracts a single segment.  It does not break the file up into many chunks of arbitrary size or packet count.  See 'man split' for the text-file equivalent of what I'm talking about.
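Something like the following (untested) sketch is the sort of loop I have in mind.  It assumes that editcap's -r flag plus an explicit record range (e.g. 1-100000) keeps just those packets, and that a libpcap file containing no packets is nothing but the 24-byte file header; both of those are assumptions on my part, and the file names are placeholders.

#!/bin/sh
# Split big.cap into 100000-packet chunks without knowing the
# total packet count ahead of time.
infile=big.cap
chunksize=100000
start=1
n=0
while true; do
    end=`expr $start + $chunksize - 1`
    out=chunk$n.cap
    editcap -r $infile $out $start-$end || break
    # Stop once a chunk comes back with no packets in it
    # (i.e. nothing but the 24-byte libpcap file header).
    size=`wc -c < $out`
    if [ $size -le 24 ]; then
        rm -f $out
        break
    fi
    start=`expr $end + 1`
    n=`expr $n + 1`
done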

> > > 3)  I need to be able to use at least 1000 files in the ring
> > > buffer (although about 60,000 would be much better).  This
> > > one is by far the most important, since if I can get past the
> > > 10 file limitation I can worry about item 1) above and make
> > > do, but with only 10 files in the ring buffer I'm screwed.
> 
> That many files is not supported by ethereal, neither do I 
> think it will be.

What is your basis for this assumption?

> > > The deal is that I need to run a perpetual packet capture on
> > > a 75+ Mb link and I need to buffer to hold at least 3 days
> > > worth of data.  I have the disk space and the server hardware
> > > to do this, but I'm limited by Ethereal.
> 
> I do these things from time to time in the lab when it might take
> several days of auto testing to recreate a situation.
> When I need to do this I usually implement it something like
> #!/bin/sh
> while true;do
>     filename=`date +"%Y%m%d-%H%M%S"`
>     tethereal -s 1500 -i eth0 -w $filename -c 200000
>     gzip $filename &
> done

The time between the end of one capture and the beginning of the next is a serious problem at 75+ Mb, even with the gzip running in the background: any packets that arrive in that window are simply lost.  We want the capture to be uninterrupted.

> Or use snoop or tcpdump instead of tethereal.

Do these apps have more flexible ring buffers (or something similar)?  The ring buffer feature is the reason we're using tethereal in the first place.  If a while() loop in some script were sufficient, we could use any packet capture engine in the world.
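(For what it's worth, newer tcpdump releases do have a -C option that rotates the output file once it reaches a given size in millions of bytes, e.g.

    tcpdump -i eth0 -s 1500 -C 100 -w capture

but as far as I can tell it just keeps writing numbered files indefinitely rather than maintaining a fixed-size ring, so the pruning would still have to happen outside tcpdump.)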

Again, the modifications I need don't appear - to me anyway - to be that significant.  If I am wrong, what is the basis for the 10-file limit in the first place?  On the other issue, if t/ethereal can stop after NN seconds or MM frames, why can't it rotate the capture files based on the same criteria?
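The closest I can get from the outside is something like the sketch below: rotate a date-stamped file every five minutes, then compress and prune in the background, deleting anything older than three days.  It assumes tethereal's "-a duration:N" autostop condition, and the file names and interval are placeholders.  Even so, it still drops packets in the gap between one tethereal exiting and the next one starting, which is exactly the problem.

#!/bin/sh
# Rough external approximation of a 3-day ring buffer.
while true; do
    filename=ringbuf-`date +"%Y%m%d-%H%M%S"`
    # Capture for 5 minutes (300 s), then exit and rotate.
    tethereal -s 1500 -i eth0 -w $filename -a duration:300
    # Compress the finished file and prune anything older than
    # 3 days, both in the background so the next capture can
    # start as soon as possible.
    gzip $filename &
    find . -name 'ringbuf-*' -mtime +3 -exec rm -f {} \; &
done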

--J