On Thu, Apr 22, 2010 at 10:47:14AM -0400, Ed Beroset wrote:
> I may have missed it, but have we measured to see if this is worth optimizing in the first place?
I have an old callgrind log (for 58,736 frames) where match_strval is called a lot:
  from dissect_ieee80211: 58.7k calls
  from ieee_80211_add_tagged_parameters: 228k calls
It's a problem with the ieee80211 dissector in particular, but when you grep the dissector sources:
#v+
$ grep -Ir 'val_to_str' ./ | wc -l
3621
$ grep -Ir 'match_strval' ./ | wc -l
305
#v-
It's a common thing to use these functions.
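For anyone who hasn't looked at epan/value_string.c recently, the current lookup is basically a linear scan, roughly like this (a simplified sketch with plain C types, not the exact epan code):
#v+
#include <stddef.h>

/* Simplified sketch of the current behaviour, for illustration only. */
typedef struct {
    unsigned int  value;
    const char   *strptr;
} value_string;

static const char *
match_strval_sketch(unsigned int val, const value_string *vs)
{
    /* the array ends with a { 0, NULL } entry, so every miss
     * walks the whole array */
    for (size_t i = 0; vs[i].strptr != NULL; i++) {
        if (vs[i].value == val)
            return vs[i].strptr;
    }
    return NULL;   /* val_to_str() then falls back to its printf format */
}
#v-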
> If not, I think I'd favor choosing code
Come on, this code only requires adding:
static const VALUE_STRING_FAST(some_value_string_array);
I don't see much difference between using:
val_to_str_fast(..., &foo_fast) and val_to_str(..., foo)
If you don't like the automatic variable naming, I can replace VALUE_STRING_FAST with:
#define VALUE_STRING_FAST_INIT(x) = { array_length(x)-1, x }
static const value_string_fast foo = VALUE_STRING_FAST_INIT(bar);
Each dissector maintainer is free to use the old or the new function (or write his own),
but putting 57 new lines into the core won't hurt.
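To give a rough idea of how the fast variant can use that stored length (this is only an illustrative sketch, not the actual patch): try the value as a direct index first, which is the common case for dense 0..N arrays, and fall back to the normal scan otherwise:
#v+
/* Illustrative sketch only -- not the actual patch. */
typedef struct {
    unsigned int        length;  /* entries, excluding the { 0, NULL } terminator */
    const value_string *vs;
} value_string_fast;

static const char *
match_strval_fast_sketch(unsigned int val, const value_string_fast *fvs)
{
    if (val < fvs->length && fvs->vs[val].value == val)
        return fvs->vs[val].strptr;            /* O(1) direct hit */
    return match_strval_sketch(val, fvs->vs);  /* fall back to the linear scan */
}

/* a dissector would keep its value_string array as-is and add e.g.:
 *   static const value_string_fast foo_fast = VALUE_STRING_FAST_INIT(foo_vals);
 */
#v-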
> clarity over an inconsequential speedup.
ATM I don't have a working profiler to benchmark Wireshark, but I made a
_raw_ benchmark [1], with 5k entries and searching values 0 to 6k three times.
Average time over 16 runs:
val_to_str_fast: 0.02s
val_to_str: 1.56s
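The benchmark itself is nothing fancy; roughly this shape (simplified, not the exact file from [1]):
#v+
/* Rough shape of such a raw benchmark: a dense 5000-entry array,
 * then 3 sweeps over the values 0..5999 with each lookup function. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

enum { ENTRIES = 5000, SWEEP = 6000 };

int main(void)
{
    value_string *vals = calloc(ENTRIES + 1, sizeof *vals); /* + terminator */
    for (unsigned int i = 0; i < ENTRIES; i++) {
        vals[i].value  = i;
        vals[i].strptr = "dummy";
    }
    const value_string_fast fast = { ENTRIES, vals };

    clock_t t0 = clock();
    for (int pass = 0; pass < 3; pass++)
        for (unsigned int v = 0; v < SWEEP; v++)
            (void)match_strval_fast_sketch(v, &fast);
    printf("fast: %.2fs\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (int pass = 0; pass < 3; pass++)
        for (unsigned int v = 0; v < SWEEP; v++)
            (void)match_strval_sketch(v, vals);
    printf("slow: %.2fs\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    free(vals);
    return 0;
}
#v-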
Sorry for not providing a real-life benchmark.
Cheers.
[1] http://szara.chmurka.net/value_string-bench.c