MuladdMacro.jl
This package contains a macro for converting expressions to use `muladd` calls and fused multiply-add (FMA) operations for high performance in the SciML scientific machine learning ecosystem.
The Julia package ecosystem has a lot of safeguards against silent incorrect behavior like this. For example, if you try to add a package binary build that would use fast-math flags, it will throw an error and tell you to repent:
https://github.com/JuliaPackaging/BinaryBuilderBase.jl/blob/...
In user code you can use `@fastmath`, but it works at the semantic level: it will change `sin` to `sin_fast` in the expression you wrap, but it will not recurse down into other people's functions, because at that point you're just asking for trouble. In summary, fastmath is overused: many times people actually want other, safer optimizations (like automatic FMA), people really need to stop throwing global changes around willy-nilly, and programming languages need to push people away from such global flags, both semantically and through package-ecosystem norms.
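A minimal sketch of that locality (the function names here are mine, purely for illustration):

```julia
# `@fastmath` rewrites only the expression it wraps: the `sin` inside `g`
# becomes `Base.FastMath.sin_fast`, but the call to `f` is left alone, so
# `f` still uses the ordinary `sin` internally.
f(x) = sin(x) + 1.0               # someone else's function: untouched

g(x) = @fastmath sin(x) * f(x)    # only this `sin` becomes `sin_fast`

# `@macroexpand @fastmath sin(x) * f(x)` shows the rewrite, roughly:
# :(Base.FastMath.mul_fast(Base.FastMath.sin_fast(x), f(x)))
```

The key point is that the transformation stops at the call boundary: nothing about `f`'s own definition changes.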
But if what you want is automatic FMA, why carry along every other possible behavior with it? Just because you want FMA, suddenly NaNs are turned into Infs, subnormal numbers are flushed to zero, `sin(x)` at small values is handled inaccurately, and so on. To me that's painting numerical handling in way too broad strokes. FMA only increases numerical accuracy; it doesn't decrease it. Bundling it with unsafe transformations leaves you uncertain whether the result's accuracy has improved or gotten worse.
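A quick illustration of the accuracy point (my own example, not from the package): a fused multiply-add rounds `x*y + z` once instead of twice, so it recovers bits the naive expression throws away.

```julia
x, y, z = 0.1, 10.0, -1.0

# The Float64 nearest 0.1 is slightly above 1/10, but multiplying it by
# 10.0 rounds back to exactly 1.0, so the naive expression loses the
# residual entirely:
naive = x*y + z          # 0.0

# `fma` computes the product exactly and rounds the sum once, keeping the
# residual of the 0.1 representation:
fused = fma(x, y, z)     # 2.0^-54 ≈ 5.55e-17
```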
For reference, to handle this well we use MuladdMacro.jl, a semantic transformation that turns `x*y + z` into `muladd` expressions. It does not recurse into functions, so it does not change the definitions of the functions called inside the macro's scope.
https://github.com/SciML/MuladdMacro.jl
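A minimal usage sketch, assuming MuladdMacro.jl is installed (the `axpy` name is mine):

```julia
using MuladdMacro

# `@muladd` rewrites `a*x + y` patterns inside the wrapped definition into
# `muladd(a, x, y)` calls; functions called from the body are not rewritten.
@muladd axpy(a, x, y) = a*x + y   # becomes: axpy(a, x, y) = muladd(a, x, y)

axpy(2.0, 3.0, 1.0)               # 7.0
```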
This is something that will always increase both performance and accuracy, because it is targeted at doing only a transformation with that property (performance because `muladd` in Julia applies an FMA only if hardware FMA exists, so it effectively never resorts to software FMA emulation).
Here is a really cool automatic tool that rewrites floating point expressions to be more accurate: https://herbie.uwplse.org/