This was discussed a little bit at Sharkfest.
I often need to analyse TCP streams where I see only part of a very long TCP session, not including the initial SYN exchange where the window scaling option was negotiated (but where I know the scaling factor from other, very similar runs).
Without the proper scaling factor, we still see Zero-Window flagged from the receiving side, but no longer see Window-Full detected on the sending side, and TCP stream graphs won't show the correct available window.
The attached patch adds a TCP dissector preference that sets a single scaling factor to use for any flow where the signalled scaling option hasn't been captured. When the preference is applied, the 'Window size scaling factor' item indicates that the value is derived from a preference.
If the scaling option is dissected, this preference setting is ignored.
The default value for this preference is 'not-known', in which case current behaviour doesn't change.
I would welcome comments on the approach and on the posted patch. If there are no strong objections, I will submit it in a few days.
Thanks,
Martin
Attachment:
pref_for_window_scaling.diff