On Thu, 20 May 2010 12:05:09 -0400, Jeff Morriss
<jeff.morriss.ws@xxxxxxxxx> wrote:
> [Redirecting to -dev for this question.]
>
> Jaap Keuter wrote:
>> On 05/19/2010 07:38 PM, Joseph Laibach wrote:
>>> All,
>>>
>>> I’m running a continuous capture of data. I’m trying to use a ring
>>> buffer of 25000 files with an 8 MB file size. The problem is that the
>>> ring buffer starts overwriting after 10000 files. I’ve tried it with
>>> dumpcap and tshark. The command is using -b files:25000 -b
>>> filesize:8192. Is there a limitation to the size of the ring buffer
>>> for dumpcap and/or tshark?
>
> [...]
>
>> That's a fixed limit:
>>
>> jaap@host:~/src/wireshark/trunk$ grep RINGBUFFER_MAX_NUM_FILES *.h
>> ringbuffer.h:#define RINGBUFFER_MAX_NUM_FILES 10000
>
> Hmmm, actually, it's not: if you specify a value of 0 you get
> "unlimited" files. (I just tried it and killed dumpcap after it created
> 26,492 files.)
>
> Why have an "upper limit" at all if we also allow unlimited files?
Ehm, when you specify 0 it's not a circular buffer anymore; dumpcap just
keeps creating new files indefinitely.
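Roughly, the difference between the two modes looks like this (a sketch
only, assuming a simple index counter; this is not the actual
ringbuffer.c code):

/* Sketch only, not the actual ringbuffer.c code: with a finite ring
 * buffer the file index wraps around and old files get overwritten;
 * with num_files == 0 the index just keeps growing. */
static unsigned next_file_index(unsigned current, unsigned num_files)
{
    if (num_files == 0)
        return current + 1;            /* "unlimited": never wraps */
    return (current + 1) % num_files;  /* circular: reuse oldest slot */
}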
> This appeared in rev 7912, and it appears that the max # of files limit
> was there originally because *ethereal kept the old files open, so we
> would (prior to that commit) run out of file descriptors.
>
> Any reason not to just take this constant out and let users specify any
> number?
Any number would mean keeping an array of file names of that size as
well. And it's some sort of self-protection, since not all file systems
handle a kazillion files well. But what an appropriate limit would be,
who knows?
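For illustration, something along these lines (a sketch with made-up
names, not the actual ringbuffer.c structures): each slot has to
remember the pathname of its capture file so the oldest one can be
overwritten later, so the bookkeeping grows linearly with the requested
file count.

#include <stdlib.h>

/* Sketch only, with made-up names: each ring buffer slot remembers
 * the pathname of its capture file so the oldest one can be
 * overwritten later. */
typedef struct {
    char *fullname;   /* full pathname of the capture file */
} rb_slot;

static rb_slot *alloc_slots(unsigned num_files)
{
    /* memory grows linearly with the requested number of files */
    return calloc(num_files, sizeof(rb_slot));
}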
Thanks,
Jaap