I'm trying to understand the Round Trip Time graph and why it
seems to jump up and down between near 0 and 270 ms. That doesn't
make sense to me: I don't see how any packet could have an RTT of
near 0, so I figure I don't understand how to read the graph.
I also don't understand why the RTT would bounce up and down, or
why there are "gaps" in the graph (e.g., between 45-85 s) versus
the almost solid appearance between 380-410 s.
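To sanity-check the graph, I was going to dump Wireshark's raw per-ACK
RTT samples with tshark and eyeball the numbers directly. A rough
sketch of what I had in mind (capture.pcap is just a placeholder for
my capture file, and I'm assuming tshark is on the PATH):

    import subprocess

    # Dump the per-ACK RTT samples Wireshark computed (tcp.analysis.ack_rtt)
    # so the raw numbers can be compared against the graph's 0-270 ms range.
    out = subprocess.run(
        ["tshark", "-r", "capture.pcap",        # placeholder capture file
         "-Y", "tcp.analysis.ack_rtt",          # keep only packets with a sample
         "-T", "fields", "-e", "tcp.analysis.ack_rtt"],
        capture_output=True, text=True, check=True,
    ).stdout

    rtts = [float(v) for v in out.split()]      # field values are in seconds
    if rtts:
        print(f"{len(rtts)} samples, min={min(rtts) * 1e3:.2f} ms, "
              f"max={max(rtts) * 1e3:.2f} ms")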
Here are the RTT and throughput graphs I'm trying to decipher:
https://i.imgur.com/4ijLxTJ.jpg
It looks like I have relatively low latency where the graph peaks
at around 150 ms, but then something causes a jump and the latency
climbs to over 250 ms.
It also seems that where I'm getting low latency, my throughput
peaks, with the average packet length falling from 1500 bytes down
to under 100 bytes.
I don't see any clear errors, or any reason for such a sudden drop.
Should I be looking for some type of dropped packets or errors?
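If it is loss, I figure I can count the packets Wireshark flags as
suspicious, along the same lines (same placeholder capture file):

    import subprocess

    # Count the TCP events Wireshark flags as possible loss indicators.
    for event in ("tcp.analysis.retransmission",
                  "tcp.analysis.duplicate_ack",
                  "tcp.analysis.lost_segment"):
        out = subprocess.run(
            ["tshark", "-r", "capture.pcap", "-Y", event],
            capture_output=True, text=True, check=True,
        ).stdout
        print(f"{event}: {len(out.splitlines())} packets")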
Could this be caused by my ISP cutting bandwidth in a step-wise
manner as a means of traffic control? Or could it be some sort of
bufferbloat, with a buffer filling up and something halting output
to wait for the buffers to drain?
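My plan for testing the bufferbloat theory was to ping the far end
once a second while the transfer runs and watch whether the RTT climbs
with load. Roughly this (example.net stands in for the real remote
host; assumes a Unix-style ping with -c):

    import re
    import subprocess
    import time

    HOST = "example.net"  # placeholder for the remote host

    # Sample the RTT once a second; if it climbs while throughput rises,
    # that points at a queue filling up (bufferbloat).
    for _ in range(30):
        out = subprocess.run(["ping", "-c", "1", HOST],
                             capture_output=True, text=True).stdout
        m = re.search(r"time=([\d.]+)", out)
        print(f"{m.group(1)} ms" if m else "timeout")
        time.sleep(1)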
Another possibility is that the application on my end is running on
a high-speed internal network with a 9k jumbo frame size. Could the
mismatch between that and the external frame size of 1.5k be causing
some type of hysteresis?
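To rule the frame-size theory in or out, I thought I'd probe the path
MTU with don't-fragment pings of decreasing size. This sketch uses the
Linux iputils "ping -M do" syntax (macOS/BSD ping uses -D instead),
with example.net again a placeholder:

    import subprocess

    HOST = "example.net"  # placeholder for the remote host

    # Payload sizes of 8972 and 1472 bytes correspond to 9000- and 1500-byte
    # packets once the 28 bytes of IP + ICMP headers are added.
    for payload in (8972, 1472, 1400):
        ok = subprocess.run(
            ["ping", "-c", "1", "-M", "do", "-s", str(payload), HOST],
            capture_output=True,
        ).returncode == 0
        print(f"{payload + 28}-byte packets: {'pass' if ok else 'blocked'}")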
Any ideas on how, if it's even possible, I might even this out?
It sort of wreaks havoc with the local application...
Thanks!