I think a significant part of the memory used is memory that Wireshark itself allocates for things like reassembly and protocol state management.
Since most se allocation is done from within dissectors, once more/most of this is converted to use the emem allocators we could maybe do something like this:
1. set an upper limit on how much memory we allow to be allocated by the se allocator
2. when se_alloc() is called and we have reached this limit, throw a new exception, MemError, to cause dissection of the packet to be aborted but allow Wireshark to continue (see the sketch below)
This might actually work.
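Rough sketch of what that check could look like (hypothetical only: MemError is the proposed new exception, and se_memory_limit / se_bytes_allocated are made-up bookkeeping names; the real se_alloc() uses chunked pools internally rather than a plain g_malloc()):

    #include <glib.h>
    #include "exceptions.h"   /* Wireshark's TRY/CATCH/THROW machinery */

    static gsize se_memory_limit    = 64 * 1024 * 1024;  /* example cap: 64 MB */
    static gsize se_bytes_allocated = 0;                 /* running total */

    void *
    se_alloc(size_t size)
    {
        if (se_bytes_allocated + size > se_memory_limit) {
            /* Abort dissection of the current packet only; the TRY/CATCH
             * wrapper around each dissector call would catch MemError,
             * flag the packet as errored, and let Wireshark continue. */
            THROW(MemError);
        }
        se_bytes_allocated += size;
        return g_malloc(size);
    }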
On 8/25/06, Jeff Morriss <jeff.morriss@xxxxxxxxxxx> wrote:
Ravi Kondamuru wrote:
>
> Thanks for the wiki link.
>
> In the workarounds highlighted, I feel that point 3 (split the capture
> file into several smaller ones) could be made more appealing by
> programmatically limiting the amount of data (packets, memory consumed,
> load time) Wireshark has already read/used.
>
> Wireshark does something similar when a large file is selected in the
> "Select a capture file" dialog box: after 3 secs
> (prefs: file_open_preview_timeout) of reading the file, it stops reading
> further and displays "more than xyz packets (preview timeout)".
>
> My point being, can the same approach be taken with large files during
> the actual display?
>
> One option would let the user make Wireshark parse the subsequent or
> previous packets until a timeout happens again. Another option would
> let users make Wireshark read the complete file before display. How
> much to read at a time could be determined, as mentioned earlier, by
> one of 1) the number of packets read, 2) the memory consumed so far, or
> 3) the amount of time spent reading.
>
> Please mail if you think of any issues that might make this
> approach not worth pursuing.
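For illustration, a bounded read loop along the lines described above might look roughly like this (read_limits_t, read_next_packet(), and bytes_used() are hypothetical names, not existing Wireshark APIs):

    #include <glib.h>

    typedef struct {
        guint32 max_packets;  /* criterion 1: number of packets read */
        gsize   max_bytes;    /* criterion 2: memory consumed so far */
        double  max_seconds;  /* criterion 3: time spent reading */
    } read_limits_t;

    extern gboolean read_next_packet(void);  /* hypothetical: read one packet */
    extern gsize    bytes_used(void);        /* hypothetical: memory in use */

    /* Read packets until EOF or until any of the three limits is hit.
     * Returns TRUE if we stopped early (more packets remain in the file). */
    static gboolean
    load_some_packets(const read_limits_t *lim)
    {
        GTimer   *timer = g_timer_new();
        guint32   count = 0;
        gboolean  stopped_early = FALSE;

        while (read_next_packet()) {
            count++;
            if (count >= lim->max_packets ||
                bytes_used() >= lim->max_bytes ||
                g_timer_elapsed(timer, NULL) >= lim->max_seconds) {
                stopped_early = TRUE;
                break;
            }
        }
        g_timer_destroy(timer);
        return stopped_early;
    }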
I think the problem with this approach is that it's difficult to know
[at least in a cross-platform manner that works on all the platforms
Wireshark runs on] when you're going to run out of memory until you
actually have run out of memory (and malloc() fails). As mentioned in
the Wiki, Wireshark and (more importantly as it's a bigger job to
change) some of the libraries Wireshark uses simply call abort() when
malloc() fails.
-J
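One concrete instance of that failure mode: GLib's g_malloc(), which Wireshark uses heavily, never returns NULL; on allocation failure it logs a fatal error and terminates the process, so the application never gets a chance to back off gracefully. A minimal demonstration:

    #include <glib.h>

    int
    main(void)
    {
        /* g_malloc() aborts the whole process on allocation failure
         * instead of returning NULL, so the caller cannot recover. */
        gpointer p = g_malloc(G_MAXSIZE);  /* absurd request to force failure */
        g_free(p);                         /* never reached */
        return 0;
    }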
> On 8/22/06, *Jeff Morriss* <jeff.morriss@xxxxxxxxxxx> wrote:
>
> Guy Harris wrote:
> > Ravi Kondamuru wrote:
> >
> >> My question:
> >> Is there a known limit on the number of packets that Wireshark
> >> can deal with in a single file?
> >
> > The number of packets that Wireshark (or, I suspect, any network
> > analyzer) can deal with is limited; due to a number of factors, the
> > GUI widget used to implement the packet list display being one of
> > them (it allocates a string for the text value in every column, which
> > eats a lot of memory), Wireshark's limit might be lower than some
> > other analyzers'.
> >
> > This is not a limit saying something such as "Wireshark can't read
> > more than 1,227,399 packets"; the point at which it'd run out of
> > memory depends on the contents of the packets.
>
> See this page for more info:
>
> http://wiki.wireshark.org/KnownBugs/OutOfMemory
>
_______________________________________________
Wireshark-dev mailing list
Wireshark-dev@xxxxxxxxxxxxx
http://www.wireshark.org/mailman/listinfo/wireshark-dev