Hi,
I think it was Guy who mentioned that we could reduce memory
consumption by not storing the fragments in memory but rather saving
the file offsets and rereading that data when it's needed.
Caveat: it will not work (well?) for compressed files.
As handling large files is a frequent topic on the lists, I think it
would be interesting to at least make a proof-of-concept attempt, if
it's not overly complicated. I think the design steps should be
something like:
- Store the file offset in the topmost TVBUFF_REAL_DATA, changing the
type to something like TVBUFF_REAL_DATA_FROM_FILE to differentiate it
from tvbs not constructed from file data.
- When making sub-tvbs, they'd have the type TVBUFF_SUBSET_FROM_FILE.
- The reassembly routines should then be changed to store not the
fragments themselves but the file offset and length of each. When all
fragments are available and the data needs to be presented, it is read
from the file and stuffed into a "reassembled data tvb". Possibly, on
the first pass, the data should be kept around until the final packet
in a reassembly sequence arrives, for speed.
Did I miss something? Feasible?
If a prototype is made and yields good results, we could consider
changing the handling of compressed files to uncompress them to a
temporary file before reading them in.
Regards
Anders