Smalltalk
Parser, code model, interpreter and navigable browser for the original Xerox Smalltalk-80 v2 sources and virtual image file (by rochus-keller)
Why should it? It just does runtime analysis as usual and apparently generates efficient code. You can try it yourself with my Smalltalk-80 interpreter: https://github.com/rochus-keller/Smalltalk#a-smalltalk-80-interpreted-virtual-machine-on-luajit; you can run it with and without the JIT enabled (option -nojit) and compare the performance.
It doesn't know that it's an interpreter loop; it just performs hotpath analysis and then decides which (variable-length) traces to optimize and compile. Have a look at https://github.com/MethodicalAcceleratorDesign/MADdocs/blob/master/luajit/luajit-doc.pdf. Actually I'm just a LuaJIT user, not a specialist in tracing JITs. I just wanted to correct the wrong claim that "performance improvements for tracing JITs" "struggle to achieve their goals" with "Interpreters and similar programs that “look like” interpreters". If that were the case, PyPy would not be successful, and I would not have been able to achieve the measured speed-up (which, by the way, is higher than the speed-up measured at http://luajit.org/performance_x86.html).
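To illustrate the point about hotpath analysis: here is a toy sketch of how a tracing JIT spots a hot loop purely from runtime behavior, without knowing anything about what the program "is". Everything in it (the instruction set, the threshold, the trace replay) is invented for the sketch; LuaJIT's real heuristics, trace recording, and machine-code generation are far more sophisticated.

```python
HOT = 3  # invented threshold; real tracing JITs use tuned counters

def step(op, a, env):
    # Execute one straight-line instruction against the environment.
    if op == "add":
        env[a[0]] += env[a[1]]
    elif op == "inc":
        env[a] += 1

def run(program, env):
    """Interpret (op, arg) instructions; detect and replay hot traces.

    Backward-jump targets are profiled; once a target gets hot, the
    loop body is recorded as a linear trace and replayed directly,
    skipping per-instruction dispatch (the 'compiled' fast path).
    """
    counts, traces = {}, {}
    pc = 0
    while pc < len(program):
        op, a = program[pc]
        if op == "jle":  # conditional backward jump: if x <= y goto target
            x, y, target = a
            if env[x] <= env[y]:
                counts[target] = counts.get(target, 0) + 1
                if counts[target] == HOT and target not in traces:
                    # Header got hot: record the loop body as a trace.
                    traces[target] = program[target:pc]
                if target in traces:
                    # Fast path: replay the trace while the guard holds.
                    while env[x] <= env[y]:
                        for top, ta in traces[target]:
                            step(top, ta, env)
                    pc += 1
                    continue
                pc = target
                continue
            pc += 1
            continue
        step(op, a, env)
        pc += 1
    return env

# Sum 1..10: acc += i; i += 1; loop while i <= n
prog = [("add", ("acc", "i")), ("inc", "i"), ("jle", ("i", "n", 0))]
result = run(prog, {"acc": 0, "i": 1, "n": 10})
print(result["acc"])  # 55
```

The interpreter never "knows" it is running a loop over a summation; it only observes that one jump target keeps getting hit, which is exactly the kind of purely dynamic decision the comment above describes.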