Wireshark-dev: Re: [Wireshark-dev] tshark: drop features "dump to stdout" and "read filter" - c

From: Jeff Morriss <jeff.morriss.ws@xxxxxxxxx>
Date: Wed, 10 Oct 2007 12:36:08 -0400


Ulf Lamping wrote:
Packets should be lost going from the kernel up to dumpcap, not between dumpcap and *shark (unless I'm missing something: normally I would expect that writing to a full pipe results in the write blocking, not the message being discarded). So how is that different from the old model, where *shark simply read packets from the kernel as fast as it could?

You are completely ignoring that this mechanism is really time critical, and that waiting for tshark to complete its task won't make things better than having dumpcap alone in the "critical capture path".

Ummm, no, I'm not. The paragraph above says packet loss can/will be seen; I was just clarifying where it would happen.

What happens to the increasing number of packets in the kernel buffers if dumpcap is blocked on a write call to the pipe and therefore isn't fetching any packets from the kernel? After a short time the kernel buffers will fill up and the kernel will drop packets, as dumpcap is still waiting for tshark to finish.

Packets will be dropped.

But is that the end of the world? If you're monitoring such high traffic rates, you should be capturing with dumpcap directly and analyzing offline.

The "temporary file model" has been working in Wireshark's "update list of packets" mode for quite a while and works OK.
Except (unless my understanding of the problem is incorrect) when you're using a ring buffer (see bug 1650).

I see two ways of solving that problem:

- keep dumpcap and *shark synchronized all the time (for example if a
   pipe was used between the two to transfer the packets)
	- if *shark can't keep up then packets will be lost but _when_
	  they get lost is really dependent on when *shark is too slow

Now you have two tasks that must process the packets in real time instead of one, which is almost certainly a bad idea if you want to prevent packet drops.

Sure, but why are you analyzing in real time if your goal is no packet drops?

On a single-CPU (or single-core) system, if *shark is CPU-bound processing the packets, it will still deprive dumpcap of CPU time, leading to packet drops.

- have dumpcap and *shark synchronize only when changing files
	- in this case dumpcap would be fast up until changing files at
	  which point it might block for a potentially huge amount of
	  time (while *shark catches up).  In this case all the packet
	  loss would happen in "bursts" at file change time.  That seems
	  rather unattractive to me.

Another method would be to have dumpcap create all the ring buffer files and have *shark delete them when it has finished with them. That would avoid the problem, but it defeats the (common) purpose of using the ring buffer, which is to stay under some specified amount of disk usage: dumpcap could go off and create hundreds of files while *shark is still busy processing the first ones.


BTW: Bug 1650 can be summarized as follows: if the rate of incoming packets is higher than what Wireshark/tshark can process, every model with somehow limited space (e.g. ring buffer files) must fail sooner or later.

... and when that happens (currently) the user gets an error that looks (even to a Wireshark developer) like a bug. I'm surprised that you, of all the developers, would be willing to leave that failure mode in place.