Symbolics.jl vs glow
| | Symbolics.jl | glow |
|---|---|---|
| Mentions | 13 | 6 |
| Stars | 1,291 | 3,151 |
| Growth (stars, month over month) | 1.2% | 1.0% |
| Activity | 9.4 | 8.2 |
| Last commit | 5 days ago | 4 days ago |
| Language | Julia | C++ |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
Symbolics.jl mentions
- Symbolics.jl
- What packages would you like Julia to have?
  It's still far from parity with SymPy/MATLAB; here's the tracking issue: https://github.com/JuliaSymbolics/Symbolics.jl/issues/59
- Converting Symbolics.jl Objects to SymPy.jl Objects
- Error With StaticArrays Module & Symbolics.jl
  Hello Julia community. This is my second day working with Julia, having come over from SymPy for performance reasons. I am working on a project that requires calculating matrix determinants and adjugates for families of matrices with symbolic entries. I am using Symbolics.jl for the symbols and Julia 1.8.2.
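  A rough sketch of that workflow on a small 2×2 case (the matrix and the hand-written adjugate are illustrative; Symbolics.jl has no built-in adjugate function):

  ```julia
  using Symbolics, LinearAlgebra

  # Declare symbolic entries and build a small matrix of them.
  @variables a b c d
  A = [a b; c d]

  # LinearAlgebra.det works on matrices of symbolic Num entries.
  dA = det(A)              # a*d - b*c

  # Adjugate written out by hand for the 2x2 case.
  adjA = [d -b; -c a]

  # Sanity check: A * adj(A) - det(A) * I should simplify to zero.
  simplify.(A * adjA - dA * I)
  ```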
- ModelingToolkit over Modelica
- A Mature Library For Symbolic Computation?
  After spending some time reading the documentation, it turns out that JuliaSymbolics also lacks factorization functionality (see https://github.com/JuliaSymbolics/Symbolics.jl/issues/59).
- Looking for numerical/iterative approach for determining a value
  You can also get an expression for the partial of β with respect to h using Symbolics.jl:
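  A minimal sketch of that pattern (the expression for β is a stand-in; the original post's definition isn't shown here):

  ```julia
  using Symbolics

  # Stand-in definition: the real β(h) comes from the original problem.
  @variables h
  β = h^3 + 2h

  # Build the ∂/∂h operator and expand it through the expression.
  Dh = Differential(h)
  expand_derivatives(Dh(β))    # 3(h^2) + 2
  ```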
- In 2022, the difference between symbolic computing and compiler optimizations will be erased in #julialang. Anyone who can come up with a set of symbolic mathematical rules will automatically receive an optimized compiler pass to build better code
  The example is applied to the right-hand side of a generated mass-matrix ODE (DAE), which is then solved using the adaptive time stepping methods of DifferentialEquations.jl. It's a test example from the robotics / rigid-body dynamics simulation community (specifically interested in control), where the governing equations were previously generated with SymPy and recently switched to Symbolics.jl (we got the example because of some performance issues that needed fixing). The comparison is with and without applying the code simplifier before solving. The table shows an average global induced error of 1e-12 when chopping off the 1e-11 * sin(x) terms and smaller. So there's nothing "competitive" against standard adaptive time stepping here: the simplifier is used to enhance the simulation of generated models, which are then run with the adaptive time steppers.
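  The general shape of that pipeline, sketched here on a toy pendulum-style system rather than the rigid-body model described above:

  ```julia
  using Symbolics, OrdinaryDiffEq

  # Symbolic state variables and a toy right-hand side.
  @variables u1 u2
  rhs = [u2, -sin(u1)]

  # Compile the symbolic expressions into callable Julia functions;
  # build_function returns out-of-place and in-place variants.
  f_oop, f_ip = build_function(rhs, [u1, u2]; expression=Val{false})

  # Wrap the in-place version for DifferentialEquations.jl and solve
  # with an adaptive time stepper.
  f!(du, u, p, t) = f_ip(du, u)
  prob = ODEProblem(f!, [1.0, 0.0], (0.0, 10.0))
  sol = solve(prob, Tsit5())
  ```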
- From Julia to Rust
- Fractions in Julia Symbolics
  Done. https://github.com/JuliaSymbolics/Symbolics.jl/issues/215
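  For context, a minimal sketch of what fraction simplification looks like with Symbolics.jl's simplify_fractions (assuming the current API; the linked issue tracks the underlying fraction handling):

  ```julia
  using Symbolics

  @variables x
  expr = (x^2 - 1) / (x - 1)

  # Cancel the common polynomial factor; yields x + 1.
  simplify_fractions(expr)
  ```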
glow mentions
- Accelerating AI inference?
  PyTorch supports other kinds of accelerators (e.g. FPGAs, and https://github.com/pytorch/glow), but unless you want to become an ML systems engineer and have the money and time to throw away, or a business case to fund it, it is not worth it. In general, both PyTorch and TensorFlow have hardware abstractions that compile down to device code (XLA, https://github.com/pytorch/xla, https://github.com/pytorch/glow). TPUs and GPUs have very different strengths, so getting top performance requires a lot of manual optimization. Considering the cost of training LLMs, it is time well spent.
- Decompiling x86 Deep Neural Network Executables
  It's pretty clear it's referring to the output of Apache TVM and Meta's Glow.
- US government bans export of NVIDIA A100 to China and Russia, effective immediately
  I also disagree with this. For example, Meta seems desperate for AI accelerators, and in fact is already doing the "hardware customers develop the software stack themselves" approach I mentioned above: Glow is that stack. Meta is building Glow even though there are no promising AI accelerators right now; they are that desperate.
- If data science uses a lot of computational power, then why is python the most used programming language?
  For reference: in TensorFlow and JAX, for example, the tensor program gets compiled to the intermediate XLA format (https://www.tensorflow.org/xla), then passed to the XLA compiler (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/xla/service), the new TFRT runtime (https://github.com/tensorflow/runtime/blob/master/documents/tfrt_host_runtime_design.md), or some more esoteric hardware (https://github.com/pytorch/glow).
- Esperanto Champions the Efficiency of Its 1,092-Core RISC-V Chip
  The main reasons are hiring, and the depth and breadth of the product. Compilers are hard, device support is hard, the compiler community is small, and closed-source compilers quickly become weird tech islands. https://github.com/pytorch/glow
- From Julia to Rust
What are some alternatives?
- julia - The Julia Programming Language
- tvm - Open deep learning compiler stack for CPU, GPU and specialized accelerators
- Octavian.jl - Multi-threaded BLAS-like library that provides pure Julia matrix multiplication
- serving - A flexible, high-performance serving system for machine learning models
- ModelingToolkit.jl - An acausal modeling framework for automatically parallelized scientific machine learning (SciML) in Julia. A computer algebra system for integrated symbolics for physics-informed machine learning and automated transformations of differential equations
- XLA.jl - Julia on TPUs
- fricas - Official repository of the FriCAS computer algebra system
- StaticArrays.jl - Statically sized arrays for Julia
- Dagger.jl - A framework for out-of-core and parallel execution
- egg - A flexible, high-performance e-graph library
- runtime - A performant and modular runtime for TensorFlow