Sebastien Tandel wrote:
OK. Do you have a means to distinguish MIP captures that follow the
previous version of the spec from ones that follow the new one? Or is
that really unnecessary?
After the change, this should correctly dissect packets even if they
were formatted as per the older version of the spec.
With the older code, you have, in sequential byte order, two 1-byte
entities:
+--------+--------+
| flags | rsvd |
+--------+--------+
The dissector will fetch the first byte, put it into the protocol tree,
and then put the bits of that byte, starting at the high-order bit, into
the protocol tree under that byte.
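In Wireshark terms, the old code does roughly this (a minimal sketch
only; the hf_/ett_ names are made up, and the real MIP dissector may
structure this differently):

#include <epan/packet.h>

/* Hypothetical field and subtree indices. */
static int hf_mip_flags = -1;
static int hf_mip_flag_example = -1;   /* one flag bit, mask 0x80 */
static gint ett_mip_flags = -1;

/* Old style: add the 1-byte flags field to the tree, then each flag bit
   under it.  The following reserved byte gets no field of its own. */
static int
dissect_mip_flags_old(tvbuff_t *tvb, int offset, proto_tree *tree)
{
    proto_item *ti;
    proto_tree *flags_tree;

    ti = proto_tree_add_item(tree, hf_mip_flags, tvb, offset, 1, ENC_BIG_ENDIAN);
    flags_tree = proto_item_add_subtree(ti, ett_mip_flags);

    /* One flag bit shown here; the rest are added the same way. */
    proto_tree_add_item(flags_tree, hf_mip_flag_example, tvb, offset, 1, ENC_BIG_ENDIAN);

    return offset + 2;   /* step over the flags byte and the reserved byte */
}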
With the newer code, you have, again in sequential order, 2 bytes:
+--------+--------+
| flags1 | flags2 |
+--------+--------+
The dissector will fetch that as a two-byte big-endian quantity, so that
it turns into
+----------+------+
| flags | rsvd |
+----------+------+
within a 16-bit word. The dissector will then put that 16-bit value
into the protocol tree, and then put the bits of the value, starting at
the high-order bit, into the protocol tree under that 16-bit value.
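A corresponding sketch for the new code (again with made-up names); the
differences are the 2-byte item length and the 16-bit masks on the
per-flag fields:

/* Hypothetical field and subtree indices. */
static int hf_mip_flags16 = -1;        /* FT_UINT16 field covering both bytes */
static int hf_mip_flag_example16 = -1; /* same flag bit, now with mask 0x8000 */
static gint ett_mip_flags = -1;

/* New style: fetch both bytes as one big-endian 16-bit value, add it to
   the tree, then add the flag bits (defined with 16-bit masks) under it. */
static int
dissect_mip_flags_new(tvbuff_t *tvb, int offset, proto_tree *tree)
{
    proto_item *ti;
    proto_tree *flags_tree;

    ti = proto_tree_add_item(tree, hf_mip_flags16, tvb, offset, 2, ENC_BIG_ENDIAN);
    flags_tree = proto_item_add_subtree(ti, ett_mip_flags);

    /* One flag bit shown here; the rest, including the two bits that were
       reserved in the old spec, are added the same way. */
    proto_tree_add_item(flags_tree, hf_mip_flag_example16, tvb, offset, 2, ENC_BIG_ENDIAN);

    return offset + 2;
}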
For a packet formatted as per the older version of the spec, the bottom
two bits of the 10-bit flags field are 0, so the only differences between
the old dissector and the new dissector are that
1) the top-level field will be 2 bytes long, rather than 1 byte long,
and there won't be a byte in the packet for which there's no field in
the protocol tree;
2) the individual flags will be shown with 16 bit positions, rather
than with 8 bit positions (see the field-registration sketch after this
list);
3) there will be two additional flag fields shown, which will be clear
for packets formatted as per the older version of the spec, where those
two bits were reserved.
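Concretely, points 2) and 3) amount to a change like this in the
hf_register_info entries (names, abbreviations, and masks are
illustrative only; the real masks come from the spec):

Old, with 8-bit positions:

  { &hf_mip_flag_example,
    { "Example flag", "mip.flags.example",
      FT_BOOLEAN, 8, NULL, 0x80,
      NULL, HFILL } },

New, with 16-bit positions, plus entries for the two formerly reserved
bits:

  { &hf_mip_flag_example16,
    { "Example flag", "mip.flags.example",
      FT_BOOLEAN, 16, NULL, 0x8000,
      NULL, HFILL } },
  { &hf_mip_flag_new1,
    { "New flag 1", "mip.flags.new1",
      FT_BOOLEAN, 16, NULL, 0x0080,
      NULL, HFILL } },
  { &hf_mip_flag_new2,
    { "New flag 2", "mip.flags.new2",
      FT_BOOLEAN, 16, NULL, 0x0040,
      NULL, HFILL } },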
So no mechanism is needed to determine which version of the standard is
being used; the new code is backwards-compatible with packets formatted
as per the old version of the spec.