Comment #8 on bug 10068 from Guy Harris
The file size and sample interval might not matter if the small file, with 1s
samples, is already big enough to cause problems; if you're reading the larger
file, or using 0.1s samples, the system will wedge up after reading a smaller
fraction of the file, but it'll still wedge up.
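
To put rough numbers on that scaling (my arithmetic, nothing measured from the
report): the item count is just capture duration divided by the sample
interval, so 0.1s samples allocate 10x the items and hit the same memory
ceiling after 1/10 as much of the file.

    #include <stdio.h>

    /* Items allocated is roughly capture duration / sample interval. */
    static double items_for(double capture_secs, double interval_secs)
    {
        return capture_secs / interval_secs;
    }

    int main(void)
    {
        printf("1s samples:   %.0f items per hour of capture\n",
               items_for(3600.0, 1.0));
        printf("0.1s samples: %.0f items per hour of capture\n",
               items_for(3600.0, 0.1));
        return 0;
    }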
The items being allocated are io_stat_item_t's, which are 80 bytes long on an
LP64 platform, so that's 80 bytes (plus malloc overhead, if any) per sample.
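
For illustration, here's a sketch of how a per-interval item gets to 80 bytes
on LP64 (8-byte pointers); the field names are made up for the example, not
the actual io_stat_item_t layout:

    #include <stdint.h>

    /* Hypothetical per-interval statistics item. Two pointers, two
     * 64-bit counters, and six doubles are 10 * 8 = 80 bytes on LP64,
     * with no padding, since every field is 8-byte aligned. */
    typedef struct io_item_sketch {
        struct io_item_sketch *next;  /* next interval in the list */
        void     *parent;             /* back-pointer to the graph */
        uint64_t  frames;             /* packets in this interval  */
        uint64_t  bytes;              /* octets in this interval   */
        double    time_min;           /* per-interval aggregates   */
        double    time_max;
        double    time_tot;
        double    value_min;
        double    value_max;
        double    value_tot;
    } io_item_sketch_t;

    /* Holds on LP64; on ILP32 the pointers shrink and this fails. */
    _Static_assert(sizeof(io_item_sketch_t) == 80, "80 bytes on LP64");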
Unless there's a *ton* of malloc overhead, if reading the 10MB file with 1s
samples allocates a total of 80GB, that's over a billion samples, i.e. over a
billion seconds of capture time at 1s per sample, which seems unlikely for a
10MB file (and I suspect it doesn't allocate an item for sample time periods
with no packets).
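
Back-of-the-envelope check on that (again my arithmetic; assumes GiB and
ignores malloc overhead):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* At 80 bytes per 1-second sample, how much capture time
         * would it take to allocate 80 GiB of items? */
        const uint64_t item_size   = 80;                      /* bytes/sample */
        const uint64_t total_bytes = 80ULL << 30;             /* 80 GiB */
        const uint64_t samples     = total_bytes / item_size; /* ~1.07e9 */

        printf("%llu samples -> about %.0f years of capture at 1s/sample\n",
               (unsigned long long)samples,
               samples / (365.25 * 24.0 * 3600.0));  /* ~34 years */
        return 0;
    }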