Comment #26 on bug 9586 from Guy Harris
(In reply to comment #25)
> Now the 'TSF Timestamp' represents the entire 64-bit time as a whole. I can
> understand that the new dissection fetches it as an 8-byte field in
> microseconds. But we actually process that field first as a 32-bit
> representation of 'seconds' and then as a 32-bit representation of
> 'microseconds'.
You do?
How, then, do you explain that, as Joerg said in comment 4:
> The decoding of the legacy header seems to be incorrect wrt timesec/timeusec.
> My legacy trace spans ~180s but the seconds value remains at 4. My best guess
> might be a 3-byte seconds and a 3-byte useconds value - but even that didn't
> fit too well.
and, as I said in comment 5:
> ...and, in the capture you made available, the "microseconds" value is >
> 1,000,000.
which is why I said in comment 12:
> At least in the captures Joerg and I have seen, treating that field as a 4-byte
> seconds field and a 4-byte microseconds field does *NOT* work - the 4-byte
> "microseconds" field is more than 1 million in one capture, and Joerg saw a
> trace that covered more than 180 seconds but the 4-byte "seconds" value did
> not change.
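
For illustration, here is a minimal standalone sketch (not the actual
Wireshark dissector code; the TSF value and the split are assumptions chosen
to mirror the symptoms described above) showing why the 4-byte-seconds plus
4-byte-microseconds reading can't be right: a genuine microseconds field must
stay below 1,000,000, and splitting a 64-bit microsecond counter in half
produces a "seconds" word that barely moves and a "microseconds" word that
overflows that bound.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical 64-bit TSF microsecond counter: 4 * 2^32 + 150,000,000.
     * The value is made up, but it reproduces the reported symptoms. */
    uint64_t tsf = 17329869184ULL;

    /* Interpretation 1: a single 64-bit microsecond counter. */
    printf("as 64-bit usecs: %llu us (~%llu s)\n",
           (unsigned long long)tsf,
           (unsigned long long)(tsf / 1000000));

    /* Interpretation 2: split into a 4-byte "seconds" word and a 4-byte
     * "microseconds" word (which half is which is itself an assumption). */
    uint32_t secs  = (uint32_t)(tsf >> 32);
    uint32_t usecs = (uint32_t)(tsf & 0xffffffffu);

    printf("as secs/usecs: %u s, %u us%s\n",
           (unsigned)secs, (unsigned)usecs,
           usecs >= 1000000u ? "  <- impossible: usecs must be < 1,000,000" : "");

    return 0;
}

On this made-up value the split reading yields a "seconds" word stuck at 4
(it only ticks over every ~71.6 minutes of counter time) and a "microseconds"
word of 150,000,000, i.e. the same behavior described in comments 4 and 5.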