Arik Dasen wrote:
Hi all
I'm piping the tethereal output (packet summary) to my app. The
documentation says that I can use the '-l' option to force a flush, so
that I get each packet immediately after it's dissected. But this doesn't
seem to work: when I timestamp the output in my app, I notice that I get
a bunch of packets roughly every 1s...
On what operating system are you running Tethereal?
On some operating systems, the packet capture mechanism provided by the
OS, and used by libpcap (the library Ethereal and Tethereal use to
capture network traffic), supports "batching" of packets, so that a
single system call to retrieve packets from that mechanism can return a
large batch containing multiple packets, rather than just one packet.
This reduces the CPU overhead when capturing heavy network traffic.
The batching mechanism includes a timeout; if the buffer for the batch
fills, *or* the timeout expires, the batch is delivered to the application.
In some OSes, the timer starts as soon as a read is done, so that the
timer could expire even if no packets have arrived; in other OSes, the
timer starts when the first packet arrives, so at least one packet is
delivered.
In either case, if not a lot of traffic is being captured, there could
be a delay equal to the timeout (on OSes of the first type) or greater
than the timeout (on OSes of the second type) before a packet is seen.
The timeout Tethereal uses is 1000 milliseconds (those are the units of
the timeout in libpcap), or 1 second.
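
(For reference, that timeout is the "to_ms" argument to
pcap_open_live(); a stripped-down capture loop that passes a
1000-millisecond timeout, roughly the way Tethereal and tcpdump do,
would look something like the following. This is just a sketch, not
Tethereal's actual code, and the default interface name is assumed:

#include <stdio.h>
#include <pcap.h>

/* Minimal sketch of a libpcap capture loop; the 1000 passed below is
 * the read timeout, in milliseconds, that this discussion is about.
 * This is not Tethereal's or tcpdump's actual code. */
static void handler(u_char *user, const struct pcap_pkthdr *h,
                    const u_char *bytes)
{
    (void)user; (void)bytes;
    printf("packet of %u bytes\n", h->caplen);
    fflush(stdout);   /* roughly what "-l" does for the summary output */
}

int main(int argc, char **argv)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    const char *dev = (argc > 1) ? argv[1] : "eth0";  /* assumed interface */
    pcap_t *p;

    /* to_ms = 1000: on OSes that batch, packets may be held by the
     * kernel for up to about a second, or until the kernel buffer
     * fills, before a batch is delivered. */
    p = pcap_open_live(dev, 65535, 1, 1000, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }
    pcap_loop(p, -1, handler, NULL);
    pcap_close(p);
    return 0;
}

Even though the handler flushes after every packet, the batching
happens below libpcap, in the kernel, which is why "-l" alone doesn't
remove the delay.)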
The BSDs (FreeBSD, NetBSD, OpenBSD, DragonFlyBSD, BSD/OS, Mac OS
X/Darwin), and Windows when WinPcap is used, are OSes of the first type;
Solaris is an OS of the second type. Digital/Tru64 UNIX is one of those
types, but I don't remember which.
If an API were added to libpcap to support changing the timeout (I think
most of those OSes support that, although I'm not sure whether it'd work
in Windows with WinPcap), Tethereal (and tcpdump, which also uses a
1-second timeout), or libpcap itself, could in theory adjust the timeout
based on how many packets arrive per batch: if traffic is arriving
slowly, a short timeout would be used, and if it's arriving fast, a
larger timeout would be used. Doing this right would involve more work,
however; you don't want, for example, to make the timeout oscillate
(requiring a system call for *every* timeout change), or to drop packets
because the timeout is too short and you spend too much time making
system calls that read buffers without many packets in them. So this is
unlikely to happen any time soon unless somebody who wants it, and knows
how to make it work, does the work and contributes the code.
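
To give an idea of what that might look like, here's a sketch of just
the timeout-selection logic; the thresholds and the hysteresis band are
made-up numbers, and actually applying the new value would require the
libpcap API that doesn't currently exist:

/* Sketch of how an adaptive read timeout could be chosen.  This is only
 * the selection logic; applying the result would need the libpcap API
 * discussed above.  The thresholds are invented for illustration. */

#define SHORT_TIMEOUT_MS  100   /* used when traffic is arriving slowly */
#define LONG_TIMEOUT_MS   1000  /* used when traffic is arriving fast */
#define IDLE_BATCH        2     /* batches this small => shorten the timeout */
#define BUSY_BATCH        32    /* batches this big => lengthen the timeout */

/* Given the current timeout and the number of packets in the batch just
 * read, return the timeout to use for subsequent reads.  Batch sizes
 * between the two thresholds leave the timeout alone, so it doesn't
 * oscillate and force a system call on every change. */
static int next_timeout_ms(int current_ms, int packets_in_batch)
{
    if (packets_in_batch >= BUSY_BATCH)
        return LONG_TIMEOUT_MS;
    if (packets_in_batch <= IDLE_BATCH)
        return SHORT_TIMEOUT_MS;
    return current_ms;
}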