Jaap Keuter wrote:
> On Thu, 20 May 2010 12:05:09 -0400, Jeff Morriss
>> This appeared in rev 7912 and it appears that the max # of files limit
>> was there originally because *ethereal kept the old files open so we
>> would (prior to that commit) run out of fds.
>>
>> Any reason not to just take this constant out and let users specify any
>> number?
>
> Any number would mean keeping an array of names of that size as well. And
> it's some sort of self-protection, since not all file systems handle a
> kazillion files well. But what an appropriate limit would be, who knows?

According to
http://stackoverflow.com/questions/466521/how-many-files-in-a-directory-is-too-many
the lowest common denominator for commonly-used filesystems is FAT with
65535 files per directory. You can also run into performance problems on
ext3 if you don't have the "dir_index" option enabled. Short file name
generation on NTFS apparently has issues when you hit ~300,000 files:
http://technet.microsoft.com/en-us/library/cc781134%28WS.10%29.aspx
Of course, having a filesystem that handles a kajillion files doesn't do
much good if filename expansion blows up in your shell.

I think a value of 50000 or 65535 would make sense for
RINGBUFFER_MAX_NUM_FILES. We could also just print a warning like "Wow!
That's a lot of files!" instead of forcibly capping the value.
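
For what it's worth, the warning approach could look something like this
(a rough sketch only; the RINGBUFFER_WARN_NUM_FILES constant and the
check function are made up for illustration, not actual Wireshark code):

    #include <stdio.h>
    #include <glib.h>

    /* Illustrative threshold; not a real Wireshark constant. */
    #define RINGBUFFER_WARN_NUM_FILES 65535

    /* Warn about a very large ring buffer size instead of silently
     * capping it at RINGBUFFER_MAX_NUM_FILES. */
    static void
    check_ringbuffer_num_files(guint num_files)
    {
        if (num_files > RINGBUFFER_WARN_NUM_FILES) {
            fprintf(stderr,
                    "Wow! That's a lot of files! Creating %u capture files "
                    "may perform poorly on some filesystems.\n", num_files);
        }
    }

Whether we then clamp the value anyway or just let it through would be a
separate policy decision.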