I would like to calculate how much time the Client and the Server spend turning around frames.
Client ------- Switch ------- Server
                 |
                 |
              sniffer
In this example, Client is using SMB to copy a file to Server.
I'm imagining that I can calculate the Server's contribution as follows:
tshark -r foo.pcap -Y "tcp.srcport==445" -qz "io,stat,0,SUM(tcp.time_delta)tcp.time_delta"
==============================================
| IO Statistics                              |
|                                            |
| Interval size: 44.1 secs (dur)             |
| Col 1: Frames and bytes                    |
|     2: SUM(tcp.time_delta)tcp.time_delta   |
|--------------------------------------------|
|             |1                 |2          |
| Interval    | Frames | Bytes    | SUM      |
|--------------------------------------------|
| 0.0 <> 44.1 | 50069 | 50551304 | 44.145992 |
==============================================
And the Client's contribution in this way:
tshark -r foo.pcap -Y "tcp.dstport==445" -qz "io,stat,0,SUM(tcp.time_delta)tcp.time_delta"
==============================================
| IO Statistics                              |
|                                            |
| Interval size: 44.1 secs (dur)             |
| Col 1: Frames and bytes                    |
|     2: SUM(tcp.time_delta)tcp.time_delta   |
|--------------------------------------------|
|             |1                 |2          |
| Interval    | Frames | Bytes    | SUM      |
|--------------------------------------------|
| 0.0 <> 44.1 | 50069 | 50551304 | 44.145992 |
==============================================
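For comparison, I believe the filter can also be folded into the io,stat column specification itself, rather than passed separately via -Y (assuming I have the quoting and "&&" syntax right):

```
tshark -r foo.pcap -q -z "io,stat,0,SUM(tcp.time_delta)tcp.time_delta && tcp.srcport==445"
tshark -r foo.pcap -q -z "io,stat,0,SUM(tcp.time_delta)tcp.time_delta && tcp.dstport==445"
```

If these variants disagree with my -Y versions, that would tell me something about whether the -Y display filter is actually reaching the io,stat tap.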
(1) Now, the fact that both incantations report precisely the same result seems suspicious to me ... particularly since an IO Graph gives me a different result for the Server-side calculation:
Filter: tcp.srcport==445 Calc:SUM(*)tcp.time_delta Style:FBar
I'm claiming that this is a bug ... and have filed it as such ... but now I'm doubting my understanding of how -z io,stat works.
==> Can anyone see an error in my approach? Or does this actually look like a bug?
[Screen shot of IO Graph approach inserted here]
(2) Does anyone have a better (or different) way of calculating the same thing, i.e. how much 'time' the Client and Server have each contributed?
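To double-check the mental model behind my approach, here's a toy sketch (fabricated timestamps, not from this capture): as I understand it, tcp.time_delta is the time since the previous frame in the same TCP stream, so each frame's delta gets charged to whichever side sent that frame, and splitting the sum by direction should partition the stream's elapsed time between Client and Server:

```shell
# Toy sanity check with made-up timestamps.
# C = frame from the Client (dstport 445), S = frame from the Server (srcport 445).
# Each frame's gap-since-previous-frame is charged to the sender of that frame.
printf '0.00 C\n0.03 S\n0.05 C\n0.09 S\n' |
awk '{ if (NR > 1) sum[$2] += $1 - prev; prev = $1 }
     END { printf "client %.2f server %.2f total %.2f\n",
                  sum["C"], sum["S"], sum["C"] + sum["S"] }'
# -> client 0.02 server 0.07 total 0.09
```

If that model is right, the two per-direction sums should add up to roughly the capture duration ... which makes two identical 44.145992 values look even stranger to me.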
--sk
Stuart Kendrick
FHCRC