Symbolics.jl vs Dagger.jl

| | Symbolics.jl | Dagger.jl |
|---|---|---|
| Mentions | 13 | 4 |
| Stars | 1,291 | 578 |
| Growth | 1.2% | 1.2% |
| Activity | 9.4 | 8.9 |
| Last commit | 5 days ago | 10 days ago |
| Language | Julia | Julia |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
- Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
- Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
Symbolics.jl
- Symbolics.jl
- What packages would you like Julia to have?
It’s not up to parity with SymPy/MATLAB yet, not by far; here’s the tracking issue on it: https://github.com/JuliaSymbolics/Symbolics.jl/issues/59
- Converting Symbolics.jl Objects to SymPy.jl Objects
- Error With StaticArrays Module & Symbolics.jl
Hello Julia community. This is my second day working with Julia, having come over from SymPy for performance reasons. I am working on a project that requires calculating matrix determinants and adjugates for families of matrices with symbolic entries. I am using Symbolics.jl for the symbols, on Julia 1.8.2.
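For a small matrix, the determinant and adjugate can be computed with Symbolics.jl together with the LinearAlgebra standard library. A minimal sketch, assuming a hypothetical 2x2 matrix `M` (not the poster's actual matrices); for larger symbolic matrices the generic `det`/`inv` fallbacks can become slow:

```julia
using Symbolics
using LinearAlgebra

# Declare symbolic entries and build a 2x2 symbolic matrix
@variables a b c d
M = [a b; c d]

# LinearAlgebra's generic det works on symbolic matrices
detM = det(M)                  # a*d - b*c

# The adjugate satisfies M * adj(M) == det(M) * I;
# for small matrices it can be recovered as det(M) * inv(M),
# with simplify.() cleaning up the resulting fractions
adjM = simplify.(detM * inv(M))
```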
- ModelingToolkit over Modelica
- A Mature Library For Symbolic Computation?
After spending some time reading the documentation, it turns out that JuliaSymbolics also lacks factorization functionality (according to [Link](https://github.com/JuliaSymbolics/Symbolics.jl/issues/59)).
- Looking for numerical/iterative approach for determining a value
You can also get an expression for the partial of β with respect to h using Symbolics.jl:
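As a sketch of that approach (the expression for β in terms of h below is a made-up placeholder, not the one from the thread):

```julia
using Symbolics

@variables h
# Placeholder: in the real problem β is defined by the governing equations
β = h^2 * exp(-h)

# Symbolic partial derivative ∂β/∂h
dβdh = Symbolics.derivative(β, h)

# Equivalent form using the Differential operator;
# expand_derivatives evaluates the symbolic D(β)
D = Differential(h)
dβdh2 = expand_derivatives(D(β))
```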
In 2022, the difference between symbolic computing and compiler optimizations will be erased in #julialang. Anyone who can come up with a set of symbolic mathematical rules will automatically receive an optimized compiler pass to build better code.
The example applies the simplifier to the right-hand side of a generated mass-matrix ODE (DAE), which is then solved using the adaptive time-stepping methods of DifferentialEquations.jl. It's a test example from the robotics / rigid-body dynamics simulation community (specifically people interested in control), who were previously generating the governing equations with SymPy and recently switched to trying Symbolics.jl (we got the example because of some performance issues that needed fixing). The comparison is with and without applying the code simplifier before solving. The table shows an average global induced error of 1e-12 when chopping off the 1e-11 * sin(x) terms and smaller. Thus there's nothing "competitive" against standard adaptive time stepping here: the simplifier is used to enhance the simulation of generated models, which are then simulated with the adaptive time steppers.
- From Julia to Rust
- Fractions in Julia Symbolics
Done. https://github.com/JuliaSymbolics/Symbolics.jl/issues/215
Dagger.jl
- Dagger: a new way to build CI/CD pipelines
- DTable, a new distributed table implementation in Julia using Dagger.jl
Firstly, I'll say that we already have work started to implement out-of-core support directly in Dagger: https://github.com/JuliaParallel/Dagger.jl/pull/289.
With that PR in place, it should be possible to define a "storage device" backed by a database. I haven't had a chance to actually try this, since the PR still needs quite a bit of work and testing, but it's definitely on my radar!
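For context, basic Dagger task scheduling looks like this. A minimal sketch; the storage-device API discussed above comes from the unmerged PR and is not shown here:

```julia
using Distributed
addprocs(2)                      # optional: add worker processes for parallelism
@everywhere using Dagger

# Tasks are submitted to Dagger's scheduler with @spawn;
# dependencies between tasks are tracked automatically
a = Dagger.@spawn 1 + 2
b = Dagger.@spawn a * 3          # scheduled to run after `a` completes
fetch(b)                         # materialize the final result (9)
```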
- From Julia to Rust
- Cerebras’ New Monster AI Chip Adds 1.4T Transistors
I'm not sure that's necessarily the domain of a low-level package like CUDA.jl, though (which I assume you're referring to). That kind of interface is more the domain of higher-level packages like https://github.com/JuliaParallel/Dagger.jl/ and, to a lesser extent, https://juliagpu.github.io/KernelAbstractions.jl/stable/. Moreover, the jury is still out on whether the built-in Distributed module is an ideal abstraction for every use case (clusters, heterogeneous compute, etc.).
WRT Nx, my biggest question is how they'll crack the problem of still needing big balls of C++ and the shims everywhere to get acceleration. Creating a compiler that generates efficient GPU or other accelerator code is a massive research project with no clear winners, never mind the challenge of reconciling the very mutation-heavy needs of GPU compute with a mostly immutable language model.
What are some alternatives?
- julia - The Julia Programming Language
- earthly - Super simple build framework with fast, repeatable builds and an instantly familiar syntax – like Dockerfile and Makefile had a baby.
- Octavian.jl - Multi-threaded BLAS-like library that provides pure Julia matrix multiplication
- ModelingToolkit.jl - An acausal modeling framework for automatically parallelized scientific machine learning (SciML) in Julia. A computer algebra system for integrated symbolics for physics-informed machine learning and automated transformations of differential equations
- DuckDB.jl
- fricas - Official repository of the FriCAS computer algebra system
- determined - Determined is an open-source machine learning platform that simplifies distributed training, hyperparameter tuning, experiment tracking, and resource management. Works with PyTorch and TensorFlow.
- egg - egg is a flexible, high-performance e-graph library
- Metatheory.jl - General purpose algebraic metaprogramming and symbolic computation library for the Julia programming language: E-Graphs & equality saturation, term rewriting and more.
- StaticArrays.jl - Statically sized arrays for Julia
- dagger-for-github - GitHub Action for Dagger