# SumTypes.jl vs glow

| | SumTypes.jl | glow |
|---|---|---|
| Mentions | 2 | 6 |
| Stars | 92 | 3,164 |
| Growth | - | 1.4% |
| Activity | 7.8 | 8.2 |
| Last commit | 3 months ago | 6 days ago |
| Language | Julia | C++ |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
## SumTypes.jl

### Enums in Rust – and why they feel better
An interesting aspect of sum types (what Rust calls enums) is that if a language has real union types, you can implement sum types on top of them as a library, but not vice versa.
Here's my example of sum types implemented in Julia as a regular package: https://github.com/MasonProtter/SumTypes.jl
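The "sum types as a library on top of unions" idea can be sketched outside Julia as well. Below is a minimal Python analogue (the names `Shape`, `Circle`, and `Rect` are invented for illustration, and this is not SumTypes.jl's API): each variant is a plain record type, and the sum type is just the union of its variants.

```python
from dataclasses import dataclass
from typing import Union

# Two variant "constructors", each a plain record type.
@dataclass
class Circle:
    radius: float

@dataclass
class Rect:
    width: float
    height: float

# The sum type itself is just the union of its variants --
# a library-level construct built on the language's union types.
Shape = Union[Circle, Rect]

def area(s: Shape) -> float:
    # Dispatch on the "tag", i.e. the concrete variant type.
    if isinstance(s, Circle):
        return 3.141592653589793 * s.radius ** 2
    if isinstance(s, Rect):
        return s.width * s.height
    raise TypeError(f"not a Shape variant: {s!r}")
```

The point of the quoted comment is the asymmetry: this construction only needs union types from the language, whereas a language that only has closed, tagged enums cannot recover open unions from them.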
### From Julia to Rust
> Pattern matching
MLStyle.jl [1] is quite nice for this and has been around for a while.
> Tagged, closed unions
These are less general than 'real' unions and can be implemented using them. E.g. SumTypes.jl [2] has some macros to make it a bit more convenient to define them; it could use some other quality-of-life features, though.
[1] https://thautwarm.github.io/MLStyle.jl/latest/syntax/pattern...
[2] https://github.com/MasonProtter/SumTypes.jl
## glow

### Accelerating AI inference?
PyTorch supports other kinds of accelerators (e.g. FPGAs, and https://github.com/pytorch/glow), but unless you want to become an ML systems engineer and have money and time to throw away, or a business case to fund it, it is not worth it. In general, both PyTorch and TensorFlow have hardware abstractions that compile down to device code (XLA, https://github.com/pytorch/xla, https://github.com/pytorch/glow). TPUs and GPUs have very different strengths, so getting top performance requires a lot of manual optimization. Considering the cost of training LLMs, it is time well spent.
### Decompiling x86 Deep Neural Network Executables
It's pretty clear it's referring to the output of Apache TVM and Meta's Glow.
### US government bans export of NVIDIA A100 to China and Russia, effective immediately
I also disagree with this. For example, Meta seems desperate about AI accelerators, and in fact is already doing the "hardware customers develop the software stack themselves" approach I mentioned above: Glow is that stack. Meta is building Glow even though there are no promising AI accelerators right now; they are that desperate.
### If data science uses a lot of computational power, then why is python the most used programming language?
For reference: in TensorFlow and JAX, for example, the tensor computation gets compiled to the intermediate XLA format (https://www.tensorflow.org/xla), then passed to the XLA compiler (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/xla/service), the new TFRT runtime (https://github.com/tensorflow/runtime/blob/master/documents/tfrt_host_runtime_design.md), or some more esoteric hardware backend (https://github.com/pytorch/glow).
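The pipeline described above (trace a computation, lower it to an intermediate representation, hand the IR to a backend) can be illustrated with a deliberately tiny toy. This is not XLA's or Glow's API, just the shape of the idea, with all names invented:

```python
from dataclasses import dataclass

# Toy sketch of the trace -> IR -> backend pipeline that XLA and
# Glow implement for real hardware. Illustration only.
@dataclass
class Node:
    op: str      # "const", "add", or "mul"
    args: tuple  # child Nodes, or the literal for "const"

def const(v): return Node("const", (v,))
def add(a, b): return Node("add", (a, b))
def mul(a, b): return Node("mul", (a, b))

def lower(node: Node) -> list:
    """'Compile' the expression graph into a flat, linear IR
    (a postorder list of stack-machine instructions)."""
    if node.op == "const":
        return [("const", node.args[0])]
    prog = []
    for arg in node.args:
        prog += lower(arg)
    return prog + [(node.op,)]

def run(prog: list) -> float:
    """A trivial 'device' backend: a stack interpreter for the IR."""
    stack = []
    for instr in prog:
        if instr[0] == "const":
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == "add" else a * b)
    return stack[0]

# (2 + 3) * 4: build the graph once, lower it once, then execute.
graph = mul(add(const(2.0), const(3.0)), const(4.0))
ir = lower(graph)
```

The separation is the point: because the frontend only produces IR, swapping `run` for a GPU, TPU, or FPGA backend doesn't change the user-facing Python at all, which is exactly the abstraction the quoted comment describes.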
### Esperanto Champions the Efficiency of Its 1,092-Core RISC-V Chip
The main reasons are hiring, and the depth and breadth of the product. Compilers are hard, device support is hard, the compiler community is small, and closed-source compilers quickly become weird tech islands.
https://github.com/pytorch/glow
### From Julia to Rust
## What are some alternatives?

- Octavian.jl - Multi-threaded BLAS-like library that provides pure Julia matrix multiplication
- tvm - Open deep learning compiler stack for CPU, GPU, and specialized accelerators
- StaticArrays.jl - Statically sized arrays for Julia
- serving - A flexible, high-performance serving system for machine learning models
- Catlab.jl - A framework for applied category theory in the Julia language
- XLA.jl - Julia on TPUs
- Symbolics.jl - Symbolic programming for the next generation of numerical software
- egg - A flexible, high-performance e-graph library
- Metatheory.jl - General-purpose algebraic metaprogramming and symbolic computation library for the Julia programming language: e-graphs & equality saturation, term rewriting, and more
- runtime - A performant and modular runtime for TensorFlow