On Mon, Aug 18, 2014 at 5:06 PM, Jeff Morriss <jeff.morriss.ws@xxxxxxxxx> wrote:
> On 08/18/14 16:45, Evan Huus wrote:
>>
>> On Mon, Aug 18, 2014 at 4:31 PM, Guy Harris <guy@xxxxxxxxxxxx> wrote:
>>>
>>>
>>> On Aug 18, 2014, at 12:46 PM, Evan Huus <eapache@xxxxxxxxx> wrote:
>>>
>>>> Guy, how are you finding these last four or five API abuses? Do you
>>>> have some sort of super-checkAPIs or are you just doing a lot of
>>>> manual code review?
>>>
>>>
>>> No, and not exactly.
>>>
>>> I have my regression script, which I was using to check whether I'd
>>> broken anything with the X11 changes; it runs two versions of tshark against
>>> a file, and compares the results. It runs against a big collection of
>>> captures, including the menagerie used for fuzz testing.
>>>
>>> It *also* captures the standard error of tshark in both cases, and
>>> reports it regardless of whether it's different or not, so it catches
>>> dissector bug messages.
>>
>>
>> Hmm - should the fuzz script raise an error when it detects anything
>> on stderr? We'd probably catch a lot of things that way.
>
>
> Given that I don't remember the last time I saw the buildbot waterfall show
> that the fuzz bot ran to completion, I'd say we shouldn't go doing that
> quite yet.
Eh, we make it about halfway through the menagerie on valgrind now,
and the issues it's been finding are all real :)
Granted, though, we'd probably get quite a flood if we turned that on
for the fuzz-bot proper.
> (There is a check for dissector bugs in the script but it's commented out;
> maybe it should be a command-line option so the buildbot can skip the
> check but developers can enable it--e.g., when testing new code?)
+1
I'd even be tempted to have it on by default and turn it off on the
fuzz-bot, so that we don't miss assertions when testing code locally.
(Also, the test-captures.sh script probably needs something similar.)
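Roughly what I have in mind, as a sketch only (the flag and the grep
pattern are guesses on my part, not the script's actual code):

    # Sketch: dissector-bug check on by default, with a flag the
    # buildbot can pass to turn it off. The -b flag name is made up.
    CHECK_BUGS=1
    while getopts "b" OPT ; do
        case $OPT in
            b) CHECK_BUGS=0 ;;   # -b: skip the dissector-bug check
        esac
    done

    if [ $CHECK_BUGS -eq 1 ] && grep -q "Dissector bug" "$ERR_FILE" ; then
        echo "Dissector bug detected for $CAPTURE" 1>&2
        exit 1
    fi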