On Sat, Mar 9, 2013 at 3:48 PM, Jakub Zawadzki
<darkjames-ws@xxxxxxxxxxxx> wrote:
> Hi Evan,
>
>> - We weren't doing anything with the emem slab that couldn't be done with glib slices.
>
> Right now no, but there was implementation of sl_free_all().
Granted, but I don't (off the top of my head) have a potential use case for it.
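For anyone following along, the distinguishing feature under discussion is roughly this (a minimal sketch with made-up names, not the actual emem sl_* code): a slab that owns its backing block can trivially offer a "free everything" operation, which g_slice has no analogue for.

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical illustration only: a slab hands out fixed-size chunks
 * carved from one large block, and can release them all at once. */
typedef struct slab {
    char  *block;      /* one big allocation holding all chunks */
    size_t chunk_size; /* size of each object */
    size_t capacity;   /* number of chunks in the block */
    size_t used;       /* chunks handed out so far */
} slab;

static slab *slab_new(size_t chunk_size, size_t capacity) {
    slab *s = malloc(sizeof *s);
    s->block = malloc(chunk_size * capacity);
    s->chunk_size = chunk_size;
    s->capacity = capacity;
    s->used = 0;
    return s;
}

static void *slab_alloc(slab *s) {
    if (s->used == s->capacity)
        return NULL; /* a real slab would grow by chaining another block */
    return s->block + (s->used++) * s->chunk_size;
}

/* The operation in question: trivial when the allocator owns the
 * backing block, but not expressible with plain g_slice. */
static void slab_free_all(slab *s) {
    s->used = 0; /* every chunk becomes available again */
}

static void slab_destroy(slab *s) {
    free(s->block);
    free(s);
}
```

The trade-off cuts both ways, of course: the same owned block that makes free-all cheap is also the memory that never goes back to the OS.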
>> - Glib slices are much cache-friendlier
>
> Do you have any benchmarks/cachegrind output to prove it?
No, but the Glib documentation mentions several tricks the slice
allocator uses to improve cache-friendliness. Regardless, absent proof
the other way (that the emem slabs were an improvement), the less code
we own the better.
> It's quite normal to iterate over nodes; in the old code we tried to allocate
> all proto_tree nodes in one page. I doubt the glib slice allocator can do better.
Possibly not for that specific case. I haven't dug into the particular
usage patterns there.
> If you're talking about something else please specify.
>
>> - multi-threading friendly
>
> True, but I have patch to fix it (when needed).
> Generally we don't need it to be truly MT-friendly, just make single slab for single edt.
>
>> Allows glib to actually return slices to the OS on occasion. The emem
>> slab would hold onto its memory forever which resulted in a great deal of
>> wasted memory after closing a large file.
>
> False, it was currently used for fvalue_t, item_label_t and proto_node, which
> are generally allocated for only one or two frames
> (roughly the number of currently running epan_dissect calls with a tree).
Don't really follow this. The memory blocks being allocated were
definitely never returned to the OS. If you're saying that there were
never very many allocated at once (so the waste was minimal), then
that's fine, but:
- that's another reason it wasn't worth the effort to optimize
- there's still at least one slab that was being wasted, possibly more
if you had a packet with a large number of tree items
> Generally I'm fine with removing this code, but I dislike when someone
> criticizes my code without proof :P
I didn't intend it as a criticism - if you read it that way I
apologize. I've been digging around to try and figure out why, when
opening and closing a large file, Wireshark's memory usage doesn't
come back down. I identified this as one spot where we weren't
freeing our memory, so I looked at trying to do so. When I realized
that I could simplify it significantly and get our memory freed at the
same time I was perhaps a little over-eager.
(Tangent: I know that the majority of the held memory is emem not
freeing the big blocks that it allocates, but wmem fixes this already
so I've been looking at other instances).
> (It was me who changed some SLAB_* uses to sl_ in r37030.)
>
> Speaking of which, there's still SLAB_ code in packet-tcp.c.
I honestly didn't even know we had *another* slab implementation. Its
origins appear to be lost in the mists of time (the farthest back I've
traced it is 2003, when it was embedded in proto.c), but in the same
vein I would like to replace it with g_slice if there are no
objections. Glib didn't have a slice/slab allocator back then, so it
probably made sense at the time to roll our own.
---
Again, I apologize, I didn't intend to criticize your work. There was
nothing wrong with the code, I just don't think it's necessary now
that glib provides a slice/slab allocator. If glib ever starts
providing scoped memory then you're welcome to remove wmem :)
Evan