Greetings,
I cannot understand how Wireshark creates the TCP throughput graph. I
have done the following:
1. Start packet capture.
2. Start a single web download on a 2 Mbit/s link.
(The transfer was stable at 230 kB/s.)
3. Stop packet capture.
4. Filter the packets so that I only get the receiving side's packets
from a one-second slice of the transfer (a sketch of the filter follows
this list).
5. Save the filtered packets to a separate PCAP file.
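For reference, the filter in step 4 was of roughly this shape; the
address, port, and time bounds below are placeholders, not the exact
values from my capture:

    ip.src == 192.0.2.10 && tcp.srcport == 80 &&
    frame.time_relative >= 10 && frame.time_relative < 11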
When I open that file in Wireshark, the summary shows that the file
contains 170 frames, each 1514 bytes long. With 54 bytes of Ethernet,
IP, and TCP headers per frame (14 + 20 + 20), each frame carries 1460
bytes of TCP payload, so the file holds 170 * 1460 = 248200 bytes of
raw TCP payload. That means the effective transfer rate was around
242 kB/s. (That's inconsistent with what the download application was
showing, but read on.)
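As a sanity check on that arithmetic, a short scapy script (my own
sketch; it assumes scapy is installed and that the attached
1second.pcap is in the working directory) reproduces the same totals:

    # Sum the TCP payload bytes in the filtered one-second capture.
    # Assumes scapy is installed; 1second.pcap is the attached file.
    from scapy.all import rdpcap, TCP

    packets = rdpcap("1second.pcap")
    payload = sum(len(p[TCP].payload) for p in packets if TCP in p)
    print(len(packets), "frames,", payload, "TCP payload bytes")
    print("effective rate: %.0f kB/s" % (payload / 1024.0))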
When I view the TCP throughput graph, the plotted value mostly
oscillates around 235000 bytes per second, i.e. roughly 230 kB/s -
exactly what the download application was showing. But how can this be?
Why does the graphed transfer rate differ by roughly 12 kB/s (242 vs.
230 kB/s) from the simple per-frame calculation above?
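For what it's worth, here is the kind of moving-window average I
imagined the graph might use; the one-second window below is purely my
assumption, not a confirmed description of Wireshark's algorithm:

    # Sliding-window throughput: for each packet, sum the TCP payload
    # bytes seen in the preceding WINDOW seconds and divide by WINDOW.
    # The window length is an assumption, not Wireshark's documented
    # behaviour.
    from scapy.all import rdpcap, TCP

    WINDOW = 1.0  # seconds (assumed)

    packets = [p for p in rdpcap("1second.pcap") if TCP in p]
    times = [float(p.time) for p in packets]
    sizes = [len(p[TCP].payload) for p in packets]

    for t in times:
        in_window = sum(s for s, u in zip(sizes, times)
                        if t - WINDOW < u <= t)
        print("%.3f s  %.0f bytes/s" % (t - times[0], in_window / WINDOW))

Comparing that output against the plotted points would at least show
whether the placement of the averaging window accounts for the gap.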
I've read an older thread
(http://www.wireshark.org/lists/wireshark-users/200701/msg00024.html),
but when I calculate the throughput manually using the method described
there, the result still doesn't match what the graph shows. What am I
missing here?
I've attached the PCAP file in question. Thanks in advance for any tips,
--
Best regards,
Michal Kepien
[Attachment: 1second.pcap]