Ethereal-dev: Re: [Ethereal-dev] packets not desegmented if not on the default port..

Note: This archive is from the project's previous web site, ethereal.com. This list is no longer active.

From: Guy Harris <guy@xxxxxxxxxx>
Date: Mon, 10 Feb 2003 13:22:20 -0800
On Mon, Feb 10, 2003 at 09:10:13PM +0000, didier wrote:
> You're right, just did it too. The capture is from an OSX Mac and it 
> doesn't desegment with a modified DSI port too.
> I've overlooked it but each frame has 4 bytes trailing.

Are they supplying the CRC?  That's nice, BUT IT'D BE NICER IF THERE
WERE SOME WAY FOR BPF TO INDICATE WHETHER IT'S SUPPLYING A CRC OR NOT -
I seem to remember reading that the CRC is supplied, at least on
Ethernet, on NetBSD as well.
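One quick way to tell whether those 4 trailing bytes are the Ethernet FCS is to CRC the rest of the frame and compare. A minimal sketch in Python (`has_fcs` is a hypothetical helper, not anything from Ethereal; it assumes the usual IEEE 802.3 byte order, under which the FCS trailer matches `zlib.crc32` read little-endian):

```python
import zlib

def has_fcs(frame: bytes) -> bool:
    """Return True if the last 4 bytes of the frame look like a
    valid Ethernet FCS (CRC-32 over everything before them)."""
    if len(frame) < 5:
        return False
    body, trailer = frame[:-4], frame[-4:]
    # Ethernet's CRC-32 matches zlib.crc32; the FCS is transmitted
    # least-significant byte first.
    return zlib.crc32(body) == int.from_bytes(trailer, "little")
```

If this returns True for the captured frames, the trailer really is a CRC being passed up by the capture mechanism rather than padding.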

> Maybe it's the reason?

Probably not - it should just show up as a trailer.

However:

	1) have you turned on both DSI *and* TCP desegmentation?

> It's captured with ethereal on OSX, version unknown.

	2) do the frames have bad TCP checksums?

Gigabit Ethernet interfaces (or even interfaces capable of gigabit
speeds, even if they're not running at gigabit speeds) might do TCP
checksum offloading, and some non-gigabit interfaces might do so as
well; if so, then outgoing TCP segments delivered to the packet capture
mechanism might not yet have had their checksum computed, and thus might
have bad TCP checksums.
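To see why offloading shows up as "bad" checksums, recall how the TCP checksum is computed: a ones'-complement sum over a pseudo-header (source and destination IP, protocol, TCP length) plus the segment itself. A segment handed to the capture mechanism before the NIC has filled the field in will fail this check. A sketch of the validation, with hypothetical helper names (not Ethereal's actual code):

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    """16-bit ones'-complement sum with end-around carry."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

def tcp_checksum_valid(src_ip: bytes, dst_ip: bytes, segment: bytes) -> bool:
    """Check an IPv4 TCP segment: summing the pseudo-header plus the
    segment (checksum field included) must give 0xFFFF."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(segment))
    return ones_complement_sum(pseudo + segment) == 0xFFFF
```

An offloaded segment captured on the sending host typically has this field still zero (or holding only a partial sum), so `tcp_checksum_valid` reports it as bad even though it goes out on the wire correct.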

This definitely happens on Solaris/SPARC with Sun's GEM gigabit
interface, so I put in a TCP preference to turn off checksum checking -
with checking off, it neither reports checksum errors nor disables
reassembly when the checksum is bad.

I think this may happen with at least some Apple Ethernet interfaces; if
packets have bad TCP checksums, see whether turning off checksum
checking causes desegmentation to happen.
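Putting the whole recipe from this thread together: enable both reassembly preferences and turn off TCP checksum checking when reading the capture. A sketch using today's tshark preference names - the Ethereal-era equivalents may be spelled differently, and `capture.pcap` is a placeholder:

```shell
# Enable TCP and DSI desegmentation, and don't let offload-mangled
# checksums disable reassembly (preference names from current tshark):
tshark -r capture.pcap \
  -o tcp.desegment_tcp_streams:true \
  -o dsi.desegment:true \
  -o tcp.check_checksum:false
```

The same settings are reachable in the GUI under the TCP and DSI protocol preferences.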