Ethereal-dev: Re: [Ethereal-dev] Display filter as stop condition

Date: Mon, 27 Oct 2003 09:30:10 -0800 (PST)
Ronnie Sahlberg wrote:
> He was going to make a small change to the arguments used as suggested by
> Gerald and resubmit it.
> 
> Hopefully it will go in soon so someone can enhance it to optionally take
> capture filters as well.

Right you are.  The command-line syntax is:
  -a "rfilter:READ FILTER STRING"

This leaves open the *future* addition of:
  -a "cfilter:CAPTURE FILTER STRING"

But I do have a problem.  When I DON'T provide a capture filter (to
cut down the incoming rate), it does seem to run far behind and
miss lots of packets.  (Even though the cpu is mostly idle and it's
not taking all that much memory!)  Then when it exits, it thinks there
were 0 dropped packets!  I thought that if the user-mode code takes
too long to process each packet, it would detect overruns.
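
For reference, I assume the dropped-packet count tethereal prints comes
from libpcap's statistics call.  A minimal sketch (not the tethereal
code itself) of how that is normally queried:

  #include <pcap.h>
  #include <stdio.h>

  /* Print libpcap's view of the capture: ps_recv is packets received,
   * ps_drop is packets the OS dropped because its capture buffer was
   * full when they arrived. */
  void report_drops(pcap_t *ph)
  {
      struct pcap_stat st;

      if (pcap_stats(ph, &st) == 0)
          printf("received: %u  dropped: %u\n", st.ps_recv, st.ps_drop);
      else
          fprintf(stderr, "pcap_stats: %s\n", pcap_geterr(ph));
  }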

When I read in the captured file (without using ANY filters at all),
it seems to take a very long time, even though the cpu STAYS mostly idle.
WHAT is going on???

FYI - I'm running on RedHat 8.0 on x86.
___

To respond to a few other comments:

Ulf Lamping wrote:
> I would love to add this feature into the GTK part...
> So maybe it should be called "trigger" and not "stop condition" ?!?

I did look at the code, but decided it would involve too much learning
for me to accomplish.  (What???  Me, a software developer actually
having to LEARN SOMETHING???????  Alas, a very full schedule forces
me to make choices that I don't like.)  So I'll let you have the
pleasure.  :-)

I *love* the idea of it being a "trigger" that could perform an
arbitrary function ("stop after N more packets" being just one of
them).  *sigh*  I wish I had more time to spend on this.


Ronnie Sahlberg wrote:
> ... display filters ... require all the packets to be fully
> dissected.  This ... starts consuming more and more memory while
> tethereal runs.  ... capture filters do not ... cause
> the internal state in tethereal to start building up.

That sounds bad.  I did not realize that the longer tethereal runs,
the more memory it will consume if display filters are used.  Why
is this?  Memory leak bug?  Or just the nature of display filters?

I was hoping to let tethereal run for possibly days at a time
waiting for a rare event to happen.  This sounds unfeasible if
it grows without bound.  Is there a solution short of using capture
filters?

As for implementing a stop condition with a capture filter, I
can't see how to do that.  Capture filters are handled within
the pcap lib with pcap_setfilter().  My understanding is that
it performs the packet parsing and filtering in kernel space.
Is there a user-mode interface to do it?
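
For anyone following along, the normal libpcap sequence is
pcap_compile() followed by pcap_setfilter(); where the OS supports it,
the compiled BPF program runs in the kernel, so packets that fail the
filter never reach user space.  A minimal sketch, with the device name
and filter expression only as illustrations:

  #include <pcap.h>
  #include <stdio.h>

  int main(void)
  {
      char errbuf[PCAP_ERRBUF_SIZE];
      struct bpf_program prog;
      pcap_t *ph;

      ph = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
      if (ph == NULL) {
          fprintf(stderr, "pcap_open_live: %s\n", errbuf);
          return 1;
      }

      /* Compile the capture (BPF) filter and hand it to libpcap. */
      if (pcap_compile(ph, &prog, "tcp port 80", 1, 0) == -1 ||
          pcap_setfilter(ph, &prog) == -1) {
          fprintf(stderr, "filter setup: %s\n", pcap_geterr(ph));
          return 1;
      }

      /* ... pcap_loop()/pcap_dispatch() now sees only matching packets ... */
      pcap_close(ph);
      return 0;
  }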

Ideally the capture filter stop condition would be applied in kernel
space with only one packet parse.  Is there a way to communicate back
to user space that the trigger has happened?  I assume this would
require a change to the pcap lib.


Eichert, Diana wrote:
> What happens if you set a stop filter and a capture filter?
> Does the packet end up getting parsed twice, which would
> take more CPU overhead?  Should other filtering be disabled
> when setting a stop trigger, or do you just assume the user
> isn't going to shoot themselves in the foot?

The user can set a capture filter with "-f" (to reduce the packets
being captured), and he can also set a display filter with "-R" (to
further reduce the packets being displayed and/or recorded), AND he
can set a stop filter.  All three are independent.  And yes, it will
involve multiple parsings.
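
For instance, all three could be combined like this (the expressions
here are only illustrations):
  tethereal -i eth0 -f "tcp port 80" -R "tcp.flags.syn == 1" \
      -a "rfilter:tcp.flags.reset == 1"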

I don't think there's any way around doing a capture parse and at
least one display filter parse - it's my impression that those are
completely different kinds of parses.  And for efficiency purposes,
it is important to use a capture filter to reduce the number of
times per second that the display filter is applied.

Perhaps somebody more familiar with the code than me can fix my
addition so that it will do only one display-level packet parse if
the user sets both a stop filter and a display filter.  I didn't
feel safe doing it, since the code passes the compiled filter to
epan_dissect_prime_dfilter() *before* it calls epan_dissect_run().
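
In outline, I imagine it would look roughly like this (a purely
conceptual sketch with made-up names, NOT the real epan functions):
prime the dissection with the fields needed by *both* compiled filters,
run the single dissection pass, then evaluate each filter against that
one result.

  /* Conceptual sketch only; none of these names are real epan
   * functions or types. */
  #include <stdbool.h>
  #include <stddef.h>

  typedef struct { int primed_fields; } dissection_t;    /* one dissection pass */
  typedef struct { const char *expr; } packet_filter_t;  /* a compiled filter   */

  void prime_fields(dissection_t *d, const packet_filter_t *f)
  {
      (void)f;
      d->primed_fields++;             /* register the fields this filter needs */
  }

  void dissect_once(dissection_t *d, const unsigned char *pkt, size_t len)
  {
      (void)d; (void)pkt; (void)len;  /* the single display-level parse */
  }

  bool filter_matches(const packet_filter_t *f, const dissection_t *d)
  {
      (void)f; (void)d;
      return false;                   /* evaluate the filter against the result */
  }

  /* Prime BOTH filters, dissect once, then test both filters against
   * the same dissection.  Returns whether the packet passes the display
   * filter; *stop_now reports whether the stop condition fired. */
  bool process_packet(const unsigned char *pkt, size_t len,
                      const packet_filter_t *display_f,
                      const packet_filter_t *stop_f,
                      bool *stop_now)
  {
      dissection_t d = { 0 };

      prime_fields(&d, display_f);
      prime_fields(&d, stop_f);
      dissect_once(&d, pkt, len);

      *stop_now = filter_matches(stop_f, &d);
      return filter_matches(display_f, &d);
  }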