This is the most naive version of string interning possible - behind a
global mutex, we store a map from string contents to the corresponding
leaked pointer, and memoize the allocation of all strings below a
threshold length (16 bytes, for now) into that map. This requires
leaking /all/ strings, since it's not easy to tell from the pointer
alone whether a string has been interned - so interning is disabled if
string leaking is also disabled.
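
Concretely, the leaking variant boils down to something like the
following sketch (a sketch only - INTERN_THRESHOLD, INTERNER, and
intern_or_leak are illustrative names, not the exact symbols in the
tree, and it assumes a std new enough to have LazyLock):

    use std::collections::HashMap;
    use std::sync::{LazyLock, Mutex};

    // Illustrative threshold; we use 16 bytes "for now".
    const INTERN_THRESHOLD: usize = 16;

    // Map from string contents to the leaked allocation, behind a
    // global mutex.
    static INTERNER: LazyLock<Mutex<HashMap<Box<[u8]>, &'static [u8]>>> =
        LazyLock::new(|| Mutex::new(HashMap::new()));

    fn intern_or_leak(contents: &[u8]) -> &'static [u8] {
        // Long strings aren't memoized; just leak them individually.
        if contents.len() > INTERN_THRESHOLD {
            return Box::leak(contents.to_vec().into_boxed_slice());
        }
        // Short strings: leak exactly once, then hand out the same
        // pointer for every later occurrence of the same contents.
        *INTERNER
            .lock()
            .unwrap()
            .entry(contents.into())
            .or_insert_with(|| Box::leak(contents.to_vec().into_boxed_slice()))
    }

The entry API makes the memoization atomic under the lock: each
distinct short string is allocated and leaked at most once, and every
later lookup returns the shared pointer.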
In the case where we're leaking strings (the default), even the naive
version of this gets us a pretty nice performance boost:

    hello outpath  time:   [742.54 ms 745.89 ms 749.14 ms]
                   change: [-2.8722% -2.0135% -1.0654%] (p = 0.00 < 0.05)
                   Performance has improved.

However, in the case where we're not leaking strings, we have to keep
track of which strings have and haven't been interned, which makes this
a little worse:

    hello outpath  time:   [779.30 ms 792.82 ms 808.74 ms]
                   change: [+2.5258% +4.0884% +5.8931%] (p = 0.00 < 0.05)
                   Performance has regressed.

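The cost is that extra bookkeeping: when strings aren't leaked, the
drop path has to ask the interner whether it actually owns a given
allocation before freeing it. A minimal sketch of the idea
(INTERNED_ADDRS and drop_string are hypothetical names; the real
change threads this through our string type):

    use std::collections::HashSet;
    use std::sync::{LazyLock, Mutex};

    // Addresses of every allocation handed out by the interner; the
    // interning path would insert each new pointer into this set.
    static INTERNED_ADDRS: LazyLock<Mutex<HashSet<usize>>> =
        LazyLock::new(|| Mutex::new(HashSet::new()));

    // Free a string's backing allocation unless the interner owns it.
    // Every drop of a non-interned string now pays for this lock and
    // lookup, which is where the regression above comes from.
    fn drop_string(ptr: *mut u8, len: usize) {
        if INTERNED_ADDRS.lock().unwrap().contains(&(ptr as usize)) {
            return; // shared with the interner, never freed here
        }
        // SAFETY (sketch assumption): `ptr`/`len` describe a live
        // Box<[u8]> allocation that nothing else still references.
        unsafe {
            drop(Box::from_raw(std::ptr::slice_from_raw_parts_mut(ptr, len)));
        }
    }
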
Hopefully we can close the gap here a bit with some clever
tricks (coming next).
Change-Id: If08cb48ede703c7fe3bdd8d617443f8a561ad09b
Reviewed-on: https://cl.tvl.fyi/c/depot/+/12047
Tested-by: BuildkiteCI
Reviewed-by: flokli <flokli@flokli.de>
Autosubmit: aspen <root@gws.fyi>