| From: Guy Harris
|
| On Dec 1, 2003, at 11:09 AM, Biot Olivier wrote:
|
| > | The problem with any scheme that does stuff when a row is made
| > | visible is that it'd require that the packet be dissected when
| > | its row is drawn - which would require that dissection to happen
| > | reasonably quickly, which would, in turn, require that the data
| > | for the packet be read reasonably quickly, meaning random access
| > | to the capture file be reasonably fast.
| >
| > Ah. That's the major problem I think.
|
| That's the major problem - another potential problem is that
| "g_node_append()" is, I think, linear in the length of the list to
| which it's appending, which means that there's some N^2 behavior;
| this shows up in some very big packets that take many many seconds
| to be dissected.
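To make that concrete: each g_node_append() call rescans the child list, so
appending N fields costs on the order of N^2/2 pointer chases in total. A
small sketch of the pattern, and of a possible workaround that remembers the
last child (the helper names below are mine, not from the sources):

#include <glib.h>

/* Quadratic pattern: every g_node_append() walks the whole child list,
 * so adding N children is O(N^2) overall. */
static void
build_slow(GNode *parent, guint n_fields)
{
    guint i;

    for (i = 0; i < n_fields; i++)
        g_node_append(parent, g_node_new(GUINT_TO_POINTER(i)));
}

/* Possible workaround: keep a pointer to the last child and insert
 * after it, which makes each insertion O(1). */
static void
build_fast(GNode *parent, guint n_fields)
{
    GNode *last = NULL;
    guint i;

    for (i = 0; i < n_fields; i++) {
        GNode *child = g_node_new(GUINT_TO_POINTER(i));
        if (last == NULL)
            g_node_append(parent, child);
        else
            g_node_insert_after(parent, last, child);
        last = child;
    }
}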
I also suspect the conversation tracking code is superlinear in time, which
would explain why captures with many conversations take ages to read and
parse. Finally, large value_string[] arrays may also slow down dissection
considerably, although gprof only puts them in 50th to 60th place for CPU
consumption with my test capture.
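For reference, a value_string[] lookup is essentially a linear scan down to
the {0, NULL} terminator, so a table with thousands of entries can mean
thousands of compares for every miss. A simplified sketch (not the exact
Ethereal code):

#include <glib.h>

typedef struct _value_string {
    guint32      value;
    const gchar *strptr;
} value_string;

/* Walk the array until the terminating {0, NULL} entry; cost is linear
 * in the table size, and a miss always pays the full length. */
static const gchar *
linear_match(guint32 val, const value_string *vs)
{
    guint i;

    for (i = 0; vs[i].strptr != NULL; i++) {
        if (vs[i].value == val)
            return vs[i].strptr;
    }
    return NULL;
}

A table sorted by value could instead be searched in O(log N) with a binary
search.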
Should we think about building balanced trees instead of linearly linked
lists for big data structures? Or am I wandering off the road here?
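GLib already provides GTree, a balanced binary tree, so one experiment could
be to put conversation state in one of those. A minimal sketch, with the
conv_t contents and the single integer key invented for illustration (the
real conversation table is keyed on address/port tuples):

#include <glib.h>

/* Hypothetical conversation record; only the key matters for the sketch. */
typedef struct {
    guint32 id;     /* stand-in for the real address/port lookup key */
    /* ... per-conversation state ... */
} conv_t;

static gint
conv_compare(gconstpointer a, gconstpointer b)
{
    guint32 ka = GPOINTER_TO_UINT(a);
    guint32 kb = GPOINTER_TO_UINT(b);

    return (ka < kb) ? -1 : (ka > kb) ? 1 : 0;
}

/* GTree keeps itself balanced, so insert and lookup are O(log N)
 * instead of the O(N) scan of a linked list. */
static GTree *
conv_table_new(void)
{
    return g_tree_new(conv_compare);
}

static void
conv_table_add(GTree *table, conv_t *conv)
{
    g_tree_insert(table, GUINT_TO_POINTER(conv->id), conv);
}

static conv_t *
conv_table_find(GTree *table, guint32 id)
{
    return g_tree_lookup(table, GUINT_TO_POINTER(id));
}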
| > Maybe we can save the decoding dictionary in a temporary file at
| > the time we read the file?
|
| What is the "decoding dictionary"? Do you mean the protocol trees
| for the packets?
I did not think of that initially; I first thought of saving packet start
and length information, but for compressed files I meant the intermediate
states of the decompressor ("key frames") from which decompression can be
resumed without knowledge of the prior data.
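Concretely, each key frame would have to record the compressed and
uncompressed offsets, the leftover bit count, and the last 32 KB of output
to reload as the inflate dictionary. A rough sketch with zlib, leaving out
the surrounding file I/O and with the names invented:

#include <sys/types.h>
#include <zlib.h>

#define WINSIZE 32768U  /* zlib's maximum back-reference distance */

/* One "key frame": enough state to restart inflation mid-stream. */
typedef struct {
    off_t         in_offset;       /* offset in the compressed file      */
    off_t         out_offset;      /* corresponding uncompressed offset  */
    int           bits;            /* bits consumed from the prior byte  */
    unsigned char window[WINSIZE]; /* last 32 KB of uncompressed output  */
} keyframe_t;

/* Resume decompression at a key frame: raw inflate, re-prime any
 * leftover bits, then reload the saved window as the dictionary.
 * prior_byte is the byte read just before in_offset when bits != 0. */
static int
resume_at(z_stream *strm, const keyframe_t *kf, int prior_byte)
{
    int ret;

    strm->zalloc = Z_NULL;
    strm->zfree = Z_NULL;
    strm->opaque = Z_NULL;
    strm->avail_in = 0;
    strm->next_in = Z_NULL;

    ret = inflateInit2(strm, -15);   /* raw deflate, no gzip header */
    if (ret != Z_OK)
        return ret;
    if (kf->bits)
        inflatePrime(strm, kf->bits, prior_byte >> (8 - kf->bits));
    return inflateSetDictionary(strm, kf->window, WINSIZE);
}

The caller would seek to in_offset (minus one byte when bits is nonzero),
call resume_at(), and then inflate() forward to the packet it wants.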
Regards,
Olivier