Wireshark-dev: Re: [Wireshark-dev] [Wireshark-users] tshark or dumpcap ring buffer limitations

From: Jeff Morriss <jeff.morriss.ws@xxxxxxxxx>
Date: Fri, 21 May 2010 17:02:19 -0400
Gerald Combs wrote:
Jaap Keuter wrote:
On Thu, 20 May 2010 12:05:09 -0400, Jeff Morriss wrote:
This appeared in rev 7912, and it looks like the max number of files limit was there originally because *ethereal kept the old files open, so prior to that commit we would run out of fds.

Any reason not to just take this constant out and let users specify any number?
Any number would mean keeping an array of names of that size as well. And
it's some sort of self-protection, since not all file systems handle a
kazillion files well. But what an appropriate limit would be, who knows?
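
For reference, a minimal sketch of the kind of clamp we're talking about (the constant's value and the helper are made up for illustration; the real check lives in the capture options handling):

    #include <stdio.h>

    #define RINGBUFFER_MAX_NUM_FILES 1024   /* assumed value, for illustration only */

    /* Clamp a user-supplied "-b files:N" value to the compile-time maximum. */
    static unsigned
    clamp_ring_num_files(unsigned requested)
    {
        if (requested > RINGBUFFER_MAX_NUM_FILES) {
            fprintf(stderr, "Ring buffer file count %u is too large, using %u instead.\n",
                    requested, RINGBUFFER_MAX_NUM_FILES);
            return RINGBUFFER_MAX_NUM_FILES;
        }
        return requested;
    }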

We don't actually store all those file names, just an incrementing variable.

Sake has a point about the length in characters of the counter that we print, though the file names also include the timestamp to keep them unique. So sorting would get ugly if we go over 100,000 files, but the file names won't collide.

Speaking of which, is it really useful to have the counter + the timestamp in the file names? Certainly sorting (e.g., when viewed through 'ls' or your file explorer) doesn't need the counter part.
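
Just to make the naming scheme concrete, something along these lines (a sketch only, with a made-up base name format; the real ring buffer code differs):

    #include <stdio.h>
    #include <time.h>

    /* Build a ring buffer file name of the form base_00001_20100521170219.pcap.
     * The zero-padded counter is what makes 'ls' sorting ugly past 99,999 files,
     * but the timestamp keeps the names unique even then. */
    static void
    build_ring_filename(char *buf, size_t buflen, const char *base,
                        unsigned counter, time_t now)
    {
        char tstamp[16];

        strftime(tstamp, sizeof tstamp, "%Y%m%d%H%M%S", localtime(&now));
        snprintf(buf, buflen, "%s_%05u_%s.pcap", base, counter, tstamp);
    }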

According to

http://stackoverflow.com/questions/466521/how-many-files-in-a-directory-is-too-many

the lowest common denominator for commonly-used filesystems is FAT with
65535 files per directory. You can also run into performance problems on
ext3 if you don't have the "dir_index" option enabled. Short file name
generation on NTFS apparently has issues when you hit ~300,000 files:

http://technet.microsoft.com/en-us/library/cc781134%28WS.10%29.aspx

Of course, having a filesystem that handles a kajillion files doesn't do
much good if filename expansion blows up in your shell.


I think a value of 50000 or 65535 would make sense for
RINGBUFFER_MAX_NUM_FILES. We could also just print a warning like "Wow!
That's a lot of files!" instead of forcibly capping the value.

I would think that if we continue to support files:0 (unlimited files), then it makes more sense to just put out a warning. That would be better than forcing users to choose between unlimited (and possibly running out of disk space) and N files (which, as in this user's case, wasn't enough).
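
A rough sketch of that, with a placeholder threshold and wording:

    #include <stdio.h>

    /* Warn about suspicious "-b files:N" values instead of refusing them.
     * 65535 is the FAT per-directory limit mentioned above; treat it as a
     * "you probably don't want this" threshold rather than a hard cap. */
    #define RING_NUM_FILES_WARN_THRESHOLD 65535

    static void
    check_ring_num_files(unsigned requested)
    {
        if (requested == 0) {
            fprintf(stderr, "Note: files:0 means an unlimited number of ring buffer "
                    "files; watch your disk space.\n");
        } else if (requested > RING_NUM_FILES_WARN_THRESHOLD) {
            fprintf(stderr, "Wow! That's a lot of files! Some filesystems handle "
                    "%u files in one directory poorly.\n", requested);
        }
    }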