On Sat, Dec 15, 2001 at 07:01:09PM -0600, McNutt, Justin M. wrote:
> The specific problem is that when I do a capture on the RedHat box on an
> 802.1Q link and then try to use 'ip.addr eq a.b.c.d' (display filter),
> Ethereal aborts. Transfer the capture file to a Slackware box and do the
> same display filter (same version of Ethereal) and it works fine.
>
> I don't expect some magic answer. :-) What other information is pertinent
> here?
A stack trace from gdb, when run on the Ethereal binary and core dump in
question. (This might be due to some shared library problem on RH 7.2;
we'd have to see what routine it blew up in to figure that out.)
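Something along the lines of

    $ gdb ethereal core
    (gdb) bt

should produce one, assuming the binary is named "ethereal" and the core
dump is a file named "core" in the current directory (the exact names may
differ on your system, and the binary needs to still have its symbols,
i.e. not be stripped, for the trace to be useful).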
> There was another bug in libpcap 0.6.2 that caused packet filters to be
> applied only after a certain amount of time (<1s), which caused some
> weirdness on a highly-utilized link. Has that bug been repaired?
*If* that's the bug it sounds like, then it's fixed in the 0.7 beta.
(The bug is actually not as described - the problem is that packets
arrive, and are queued up for input, before the socket filter is put
onto the socket, and that on Linux, changing the socket filter on a
socket *doesn't* discard data already queued on the socket; on the
BSDs, for example, changing the filter on a BPF device discards data
queued up and unread. The fix was to put code into libpcap to discard
that data.)
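Roughly, the approach is to drain whatever was already sitting in the
socket's receive queue once a filter is on the socket. A minimal sketch
of that sort of drain on Linux - not the actual libpcap code, which has
to be more careful, e.g. about packets that arrive while the queue is
being drained - might look like:

    #include <sys/socket.h>
    #include <linux/filter.h>
    #include <errno.h>

    /*
     * Illustrative only: attach a BPF filter to a PF_PACKET socket,
     * then throw away any packets that were queued up before the
     * filter took effect, so they can't slip through unfiltered.
     */
    static int
    attach_filter_and_drain(int fd, struct sock_fprog *fprog)
    {
        char junk[1];

        if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER,
            fprog, sizeof(*fprog)) == -1)
            return -1;

        /* Each recv() consumes (and discards) one queued packet. */
        for (;;) {
            if (recv(fd, junk, sizeof junk, MSG_DONTWAIT) == -1) {
                if (errno == EAGAIN)
                    break;      /* receive queue is now empty */
                return -1;
            }
        }
        return 0;
    }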
> > As the tcpdump 3.6 man page says, "Note that the first vlan keyword
> > encountered in expression changes the decoding offsets for the
> > remainder of expression on the assumption that the packet is a VLAN
> > packet". If you *don't* put a "vlan" keyword into the expression, the
> > remainder of the expression will assume that the packets are ordinary
> > Ethernet packets, *not* VLAN packets.
>
> Hmmm... So what would be the syntax for "vlan any"?
"vlan" by itself.