Ethereal-users: Re: [Ethereal-users] tethereal performance questions
Note: This archive is from the project's previous web site, ethereal.com. This list is no longer active.
From: Joe Elliott <joe@xxxxxxxxx>
Date: Wed, 19 Oct 2005 09:39:34 -0700 (PDT)
Hello Guy,

I did some extensive testing with packet generators and collectors on
different platforms during a project our company did for NASA. The old
belief that FreeBSD is the best collection platform compared to Linux is
based on the old Linux 2.2 and 2.4 kernels. Using kernel 2.6, the
differences I measured were negligible. It's sad that the early tests on
2.2/2.4-kernel-based Linux that showed FreeBSD to be superior will haunt
Linux to its grave. Linux 2.6 is as good as it gets.

Make sure you don't run any other I/O-intensive apps, and renice your
collector to the highest priority to get the best results.

Joe.

On Wed, 19 Oct 2005, Guy Harris wrote:

> his tests, PF_PACKET sockets dropped a *LOT* more packets than FreeBSD's
> BPF, which dropped a *LOT* more packets than WinPcap (on the same
> hardware).

--
 __o    _~o    __o                            "Know your Network"
`\<,   `\<,   `\<,
______________________________________(*)/_(*)__(*)/_(*)__(*)/_(*)________
I'm a 21st Century Digital Boy ... I ain't got a life, but I got lotsa toys.
*************** Joe Elliott  joe@xxxxxxxxx  AOL:xqos ********************
- NetContExt - sniffer trace forensics - tcp follow stream analysis -
- Extract data files and Images from tcpdump & ethereal packet payloads -
Inetd.Com Network analysis solutions  http://www.inetd.com
--------------------------------------------------------------------------

On Wed, 19 Oct 2005, Guy Harris wrote:

> Date: Wed, 19 Oct 2005 00:31:39 -0700
> From: Guy Harris <gharris@xxxxxxxxx>
> Reply-To: Ethereal user support <ethereal-users@xxxxxxxxxxxx>
> To: Ethereal user support <ethereal-users@xxxxxxxxxxxx>
> Subject: Re: [Ethereal-users] tethereal performance questions
>
> Joe Elliott wrote:
>
> > If you have a dual processor machine you can spread the
> > load across both CPUs by doing: (tcpdump used in example)
> >
> > # tcpdump -i $ifName -w - -s $snapLen $filterCode | tcpdump -r - -w $file
> >
> > This binds 1 CPU to do the expensive kernel to user space copy and 1
> > processor to do the decode/write to disk.
>
> Presumably by "decode" you mean "copying" - "tcpdump -w" does no packet
> decoding whatsoever.
>
> Also, on what OSes does writing to a pipe and reading from that pipe not
> involve a user-to-kernel copy for the write and a kernel-to-user copy
> for the read? In
>
>   tcpdump -i $ifName -w $file -s $snapLen $filterCode
>
> I see one kernel-to-user copy from the packet capture mechanism (unless
> it's using some memory-mapped mechanism) and one user-to-kernel copy for
> writing the file (unless the buffers are page-aligned and the write is
> done by page-flipping), while in the pipeline I see an additional
> kernel-to-user and user-to-kernel copy in the second process.
>
> Perhaps what it's doing is running the capture effort and the file
> system writing effort on separate CPUs, which, on an MP server, might
> get you enough more parallelism (and perhaps enough less latency, which
> might be what really matters here) to more than compensate for the extra
> copy. (Your mileage may vary significantly on a multi-threaded processor.)
>
> (In that case, it might be interesting to see whether a multi-threaded
> capture program - which might be simpler than tcpdump, as all it'd do
> would be capture packets and write them - would do better, by avoiding
> the extra copies for the pipe. I don't know whether having the two
> processors' caches both accessing the data would make a difference,
> although the same issue might also come up for the pipe data, depending
> on how clever the kernel is about that.)
> > Finally look at some of the ring buffer techniques for libpcap that are
> > becoming more popular. This is the final step. PF_RING etc.
>
> Yes, at least some of the problem might be with Linux PF_PACKET sockets
> and the socket code, as per some of Luca Deri's papers:
>
> http://luca.ntop.org/
>
> and, in particular:
>
> http://luca.ntop.org/Ring.pdf
>
> which is the paper describing PF_RING, and which notes that, at least in
> his tests, PF_PACKET sockets dropped a *LOT* more packets than FreeBSD's
> BPF, which dropped a *LOT* more packets than WinPcap (on the same hardware).
>
> _______________________________________________
> Ethereal-users mailing list
> Ethereal-users@xxxxxxxxxxxx
> http://www.ethereal.com/mailman/listinfo/ethereal-users
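A minimal sketch of the two-process pipeline plus highest-priority scheduling
that Joe recommends above; the interface name, snaplen, filter, output path,
and priority value are placeholder assumptions, not values from this thread:

# Placeholder values -- substitute your own interface, snaplen, filter and file.
ifName=eth0
snapLen=1514
filterCode='port 80'
file=/var/tmp/capture.pcap

# "-w -" streams pcap data to stdout and "-r -" reads it from stdin, so one
# process does the kernel-to-user capture copy and the other does the disk
# write.  "nice -n -20" (root required) starts both at the highest priority;
# "renice -n -20 -p <pid>" would do the same for an already-running collector.
nice -n -20 tcpdump -i "$ifName" -w - -s "$snapLen" $filterCode \
    | nice -n -20 tcpdump -r - -w "$file"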
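Guy's closing parenthetical suggests that a small multi-threaded capture
program - one thread pulling packets from libpcap, another writing them to
disk - could get the same CPU overlap without the pipe's extra copies. A rough
sketch of that idea using libpcap and POSIX threads follows; the device name,
snaplen, queue depth, and output file are illustrative assumptions, and a real
tool would need signal handling and proper drop accounting.

/* mtcap.c - sketch: capture packets in one thread, write them in another.
 * Build (assumed): cc -O2 -o mtcap mtcap.c -lpcap -lpthread
 */
#include <pcap.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define QUEUE_SLOTS 4096        /* assumed queue depth */
#define SNAPLEN     1514        /* assumed snaplen */

struct slot {
    struct pcap_pkthdr hdr;
    unsigned char data[SNAPLEN];
};

static struct slot queue[QUEUE_SLOTS];
static int head, tail, done;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

/* Writer thread: drain the queue and append packets to the dump file. */
static void *writer(void *arg)
{
    pcap_dumper_t *dumper = arg;

    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail && !done)
            pthread_cond_wait(&nonempty, &lock);
        if (head == tail && done) {
            pthread_mutex_unlock(&lock);
            break;
        }
        struct slot s = queue[tail];   /* simple copy-out; a real tool would avoid it */
        tail = (tail + 1) % QUEUE_SLOTS;
        pthread_mutex_unlock(&lock);

        pcap_dump((u_char *)dumper, &s.hdr, s.data);
    }
    return NULL;
}

/* Capture thread: libpcap calls this for every packet; just enqueue it. */
static void got_packet(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes)
{
    (void)user;
    pthread_mutex_lock(&lock);
    int next = (head + 1) % QUEUE_SLOTS;
    if (next != tail) {                /* queue full -> drop the packet */
        queue[head].hdr = *h;
        memcpy(queue[head].data, bytes, h->caplen < SNAPLEN ? h->caplen : SNAPLEN);
        head = next;
        pthread_cond_signal(&nonempty);
    }
    pthread_mutex_unlock(&lock);
}

int main(int argc, char **argv)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    const char *dev = argc > 1 ? argv[1] : "eth0";      /* assumed default */
    const char *out = argc > 2 ? argv[2] : "out.pcap";  /* assumed default */

    pcap_t *p = pcap_open_live(dev, SNAPLEN, 1, 1000, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }
    pcap_dumper_t *dumper = pcap_dump_open(p, out);
    if (dumper == NULL) {
        fprintf(stderr, "pcap_dump_open: %s\n", pcap_geterr(p));
        return 1;
    }

    pthread_t tid;
    pthread_create(&tid, NULL, writer, dumper);

    pcap_loop(p, -1, got_packet, NULL);  /* capture until an error or pcap_breakloop() */

    pthread_mutex_lock(&lock);
    done = 1;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
    pthread_join(tid, NULL);

    pcap_dump_close(dumper);
    pcap_close(p);
    return 0;
}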
- Follow-Ups:
- Re: [Ethereal-users] tethereal performance questions
- From: Guy Harris
- Re: [Ethereal-users] tethereal performance questions
- References:
- Re: [Ethereal-users] tethereal performance questions
- From: Guy Harris
- Re: [Ethereal-users] tethereal performance questions