Anders Broman wrote:
I don't think we can/should turn off canaries in se_ allocations.
Instead we should create a new canary-less allocator. (Not sure what
such a thing should
be named, of course...)
Well, as I see it, EP memory is not a problem: we only use one chunk (10M)
during the lifetime of a packet, so memory efficiency isn't a big issue.
But when dealing with large files, wasting +30% of the memory is not an
option, I think.
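
(For reference, a minimal sketch of where that overhead comes from; this is
illustrative only, not the actual emem.c code. Each guarded allocation gets
padded with a trailing guard pattern that is checked later, so across many
small se_ allocations the padding adds up to a noticeable fraction of the
total.)

#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define CANARY_SIZE 8

static const unsigned char canary_pattern[CANARY_SIZE] =
    { 0xAA, 0xBB, 0xCC, 0xDD, 0xAA, 0xBB, 0xCC, 0xDD };

/* Allocate 'size' bytes followed by a trailing canary. */
void *guarded_alloc(size_t size)
{
    unsigned char *p = malloc(size + CANARY_SIZE);
    if (p == NULL)
        return NULL;
    memcpy(p + size, canary_pattern, CANARY_SIZE);
    return p;
}

/* Verify the canary is still intact, e.g. when the block is released. */
void guarded_check(const void *ptr, size_t size)
{
    const unsigned char *p = ptr;
    assert(memcmp(p + size, canary_pattern, CANARY_SIZE) == 0);
}

int main(void)
{
    char *s = guarded_alloc(4);     /* 4 payload bytes + 8 guard bytes */
    if (s == NULL)
        return 1;
    memcpy(s, "abc", 4);            /* in-bounds write: canary survives */
    guarded_check(s, 4);
    free(s);
    return 0;
}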
True, especially at the rate we're using memory :-)
A way to still test se_alloc() could be to have the buildbot that does the
fuzz testing use canaries, for instance.
I don't see that a new allocator would solve the problem; when would we use
it?
Well, you know, we could only use the canary allocator when we think we
might stomp on the memory. Hmmm, I guess that logic doesn't work too
well, huh? ;-)
My initial thought had been something like: only use the canary-less
allocator for stuff that we allocate a LOT of, or for core stuff (and make
dissectors use the canaries on the assumption we trust them less). I'm
not sure that makes a lot of sense either, though...
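
(To make that idea concrete, a hypothetical split; neither function name
exists in Wireshark, this is just a sketch: trusted core code that makes
huge numbers of allocations calls the bare variant, while dissectors keep
the guarded one.)

#include <stdlib.h>
#include <string.h>

#define SE_CANARY_SIZE 8

/* Guarded variant for dissectors: pad each block with a guard pattern
 * that can be checked later for overruns. */
void *se_alloc_guarded(size_t size)
{
    unsigned char *p = malloc(size + SE_CANARY_SIZE);
    if (p != NULL)
        memset(p + size, 0xAB, SE_CANARY_SIZE);
    return p;
}

/* Bare variant for trusted, high-volume core allocations: no padding,
 * so no per-block overhead. */
void *se_alloc_unguarded(size_t size)
{
    return malloc(size);
}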
I just keep thinking of a time a while ago when we saw quite a lot of
dissector bugs due to memory (canary) corruption. Apparently most of
those were ep_ allocations. Maybe you're right that just doing it on
the buildbot would work; keeping it on during development work would be
a good practice too.
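
(One way to wire up the "buildbot and development only" idea, sketched with
a made-up environment variable; WIRESHARK_SE_CANARIES, se_use_canaries and
se_init_canary_policy are hypothetical names, not existing Wireshark API.)

#include <stdbool.h>
#include <stdlib.h>

static bool se_use_canaries;

/* Called once at startup: fuzz-testing buildbots and developers export
 * the variable, while normal/release runs skip the canary overhead. */
void se_init_canary_policy(void)
{
    se_use_canaries = (getenv("WIRESHARK_SE_CANARIES") != NULL);
}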