-
I like that they are colored now, but really what needs to be added is type parameter collapsing. In most cases, you want to see `::Dual{...}`, i.e. "it's a dual number", not `::Dual{typeof(ODESolution{sfjeoisjfsfsjslikj},sfsef,sefs}` (these can literally get to 3000 characters long). As an example, see the stacktraces in something like https://github.com/SciML/DiffEqOperators.jl/issues/419. The thing is that the printout carries more type information than even the strictest dispatch uses: no function dispatches off of that first 3000-character type parameter, so printing that chunk of information is not informative to any method decision. Automated type abbreviation could take that heuristic and chop out a lot of the cruft.
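A rough sketch of the length-budget version of that heuristic (the `abbrev` helper and `maxlen` keyword are made-up names for illustration, not a real Base API):

```julia
# Hypothetical sketch: print a type in full if its representation is
# short, otherwise collapse the parameter list down to "Name{…}".
# A real implementation would live in the stacktrace printing code and
# could also use dispatch relevance, not just length.
function abbrev(T::Type; maxlen::Int = 40)
    s = sprint(show, T)
    length(s) <= maxlen && return s
    # Keep the head symbol (e.g. Dual), drop the 3000-character parameters.
    return string(nameof(T), "{…}")
end

abbrev(Vector{Int})                    # short enough: printed in full
abbrev(Dict{String,Int}; maxlen = 10)  # over budget: collapsed to "Dict{…}"
```

The same budget idea extends recursively: keep the outermost one or two parameter levels and elide everything deeper.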
-
Julia does have a minimal compilation path with an interpreter. You can even configure this on a per-module basis, which I believe some of the plotting packages do to reduce latency. There is even a JIT-style dynamic compiler which works similarly to the VMs you listed: https://github.com/tisztamo/Catwalk.jl/.
IMO, the bigger issue is one of predictability and control. Some users may not care about latency at all, whereas others have it as a primary concern. JS and related runtimes don't give you much control over when optimization happens and are thus black boxes, whereas Julia has known semantics around it. I think fine-grained tools to externally control optimization behaviour for certain modules (in addition to the current global CLI options and per-package opt-ins) would go a long way towards addressing this.
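For concreteness, the per-module opt-in mentioned above is spelled `Base.Experimental.@optlevel` today (a real but experimental API; the module and function names below are invented for illustration):

```julia
module LatencySensitive

# Per-module optimization knob: request a lower optimization level for
# this module's methods, trading peak runtime speed for lower
# compile-time latency — the mechanism the parent comment attributes
# to plotting packages. Must appear at module top level.
Base.Experimental.@optlevel 1

# Hypothetical stand-in for latency-sensitive plotting code.
plot_stub(xs) = sum(abs2, xs)

end # module

LatencySensitive.plot_stub([1.0, 2.0])  # same behaviour, compiled at the module's opt level
```

Finer-grained *external* control — setting this for someone else's module without editing their source — is the part that doesn't exist yet.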
-
You're right; I've done some more research, and there is an interpreter for Julia code: https://github.com/JuliaDebug/JuliaInterpreter.jl. It's only enabled via an annotation and is mainly used by the debugger, but it's there.
Still, it seems to execute the internal SSA IR in its raw form, which is geared more towards compilation than dynamic execution in a VM. I was talking more about a conventional bytecode interpreter (which you can optimize the hell out of, like LuaJIT did). A bytecode format carefully designed for fast execution (in either a stack-based or register-based VM) would be much better for interpretation, but I'm not sure if Julia's language semantics / object model allow for it. Maybe some intelligent people out there can make the whole thing work, is what I was trying to say.
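To make "stack-based bytecode VM" concrete, here is a toy sketch (all opcode and type names invented; this has no relation to Julia's actual IR): a flat instruction array driven by a dispatch loop, which is exactly the shape LuaJIT-style interpreters then optimize aggressively with computed gotos, NaN-boxing, and so on.

```julia
# Toy stack-based bytecode VM, for illustration only.
@enum Op PUSH ADD MUL RET

struct Instr
    op::Op
    arg::Float64   # only meaningful for PUSH
end

function run(code::Vector{Instr})
    stack = Float64[]
    pc = 1
    while true
        ins = code[pc]
        if ins.op == PUSH
            push!(stack, ins.arg)
        elseif ins.op == ADD
            b = pop!(stack); a = pop!(stack)
            push!(stack, a + b)
        elseif ins.op == MUL
            b = pop!(stack); a = pop!(stack)
            push!(stack, a * b)
        elseif ins.op == RET
            return pop!(stack)
        end
        pc += 1
    end
end

# Bytecode for (2 + 3) * 4
prog = [Instr(PUSH, 2.0), Instr(PUSH, 3.0), Instr(ADD, 0.0),
        Instr(PUSH, 4.0), Instr(MUL, 0.0), Instr(RET, 0.0)]
run(prog)  # → 20.0
```

The hard part for Julia isn't the loop above — it's that multiple dispatch and a rich object model make each "opcode" far less uniform than Lua's.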
-
Thanks for the response.
> Do you have a link to an open issue tracking this problem?
Yes [#41642](https://github.com/JuliaLang/julia/issues/41642)
> The main reason for the divergence is that Julia’s libuv fork has a significantly more flexible event loop model that allows using libuv from multiple threads efficiently.
Glad to hear this. That's impressive.
-
You can expose it as a CLI tool if you wish: https://github.com/fredrikekre/jlpkg
-
>The experience was not that my program became more safe in the sense that I could ship it without sweat on my brow. No, it was that it just worked, and I could completely skip the entire debugging process that is core to the development experience of Julia, because I had gotten all the errors at compile time.
>And this was for small scripts. I can only imagine the productivity boost that static analysis gives you for larger projects when you can safely refactor, because you know immediately if you do something wrong.
Getting the program right the first time can't _possibly_ scale past a small script. Type errors are only 2% of all bugs in large programs with companies behind them. When a big Rust program fails, you then have to contend with the fact that you can't even set a debugger checkpoint where an error occurs:
https://github.com/rust-lang/rust/issues/54144
Obviously people wouldn't be asking for that if the type and borrow checkers caught all the bugs at compile time. Not even close!
And not only can't you debug errors, you also can't change the running program at all. All you can do is terminate it, generate a new program from modified source code, and run that. I hope the bug you were working on is simple enough to reproduce with a unit test!
LOL Rust problems!
Now if only Rust had resumable exception handling, could be incrementally compiled, allowed function and type redefinitions at runtime, and wasn't designed as if we were still in the punched card era but with files instead of cards. That would give you far more of a productivity boost than a type checker.