This sounds kind of like Halide [1], which I think is also from the same group at MIT. However, Halide is an embedded DSL in C++. ATL being based on Coq seems more likely to produce correct code.
[1] https://halide-lang.org/
There is also this:
https://anydsl.github.io/
They have a framework for writing high-level code that compiles down to high-performance compute kernels.
It's unfortunate that ATL is such a generic name. The actual code you meant to link to is here:
https://github.com/ChezJrk/verified-scheduling
The only language I know of for sure that does it for you (as in, you don't have to write the type yourself) was Jai a while back (I'm told Blow removed that feature).
The only language I've actually done it in is D. It's probably doable in many other nu-C languages these days, but D at the very least can make it basically seamless, as long as you do some try-and-break-shit testing to make sure nothing is relying on saving pointers when it shouldn't. This obviously constrains the definition of "automatic" ;)
I don't have my implementation to hand because it grew out of a patch that failed due to the aforementioned pointer-saving, in code that I'm not paid enough to refactor, but here's one someone else made: https://github.com/nordlow/phobos-next/blob/master/src/nxt/s... There's another one in that repository too. I've never used those particular implementations, but they're both by people I know, so hopefully they're not too bad.
A more subtle thing, which I haven't tried in anger but would like to at some point, is to use programmer annotations (probably in the form of user-defined attributes) to group fields, so that things accessed together in time are stored together in space, i.e. temporal locality <=> spatial locality.
There are some arrays of structs in an old bit of the D compiler that are roughly the size of a cache line, and aren't accessed particularly uniformly. I profiled this and found that something like 75% of all LLC misses (i.e. going to DRAM) were due to two particularly miserable lines... inside an O(n^2) algorithm.