ITensors.jl vs Octavian.jl

| | ITensors.jl | Octavian.jl |
|---|---|---|
| Mentions | 4 | 17 |
| Stars | 485 | 222 |
| Growth | 1.6% | 0.0% |
| Activity | 9.4 | 3.9 |
| Latest commit | 7 days ago | about 1 month ago |
| Language | Julia | Julia |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ITensors.jl
-
A question relating to the BCS theory ground state
DMRG packages are available in Julia, C++, and Python. (Don't use Fortran. But here is a Fortran library if you insist.)
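To give a concrete taste of the Julia version, here is a minimal DMRG run with ITensors.jl, sketched from the package's documented OpSum/dmrg API; the model (spin-1/2 Heisenberg chain), site count, and sweep schedule are arbitrary choices for illustration:

```julia
using ITensors

N = 20
sites = siteinds("S=1/2", N)

# Build the Heisenberg Hamiltonian as an MPO from an operator sum
os = OpSum()
for j in 1:(N - 1)
    os += "Sz", j, "Sz", j + 1
    os += 0.5, "S+", j, "S-", j + 1
    os += 0.5, "S-", j, "S+", j + 1
end
H = MPO(os, sites)

# Random initial matrix product state, then sweep
psi0 = randomMPS(sites; linkdims=10)
energy, psi = dmrg(H, psi0; nsweeps=5, maxdim=[10, 20, 100], cutoff=1e-10)
```

The returned `energy` is the variational ground-state energy estimate and `psi` the optimized MPS.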
-
To those working in computational physics, what do you think of Julia?
As one example, one of the leading libraries for tensor network simulations (https://itensor.org) has recently been rewritten in Julia (it was previously C++), and the Flatiron Institute, which develops it (certainly one of the leading computational physics institutions in the world), is advising new users to use the Julia version. I also know other computational groups that use Julia, even for things like quantum Monte Carlo (where I personally would have believed C++ to have an edge, but people tell me differently). I think when even leading computational groups switch, Julia is almost always the much better option for the average user writing code from scratch (a situation not so rare in condensed matter). If you need to use particular libraries or legacy code, that obviously changes the situation.
-
Julia 1.8 has been released
> One thing that supports this view is that there are several Julia packages that are wrappers around existing C/Fortran/C++ libraries, and basically no examples (that I know) of people porting existing libraries to Julia.
As with the others, I'll strongly disagree and chime in with a few examples off the top of my head:
* ITensors.jl : They started moving from C++ to Julia a couple of years ago, and now their webpage doesn't even mention the original C++ implementation anymore: https://itensor.org/
* DifferentialEquations.jl : This has many state-of-the-art differential equation solvers in it, many of which are improvements over old Fortran libraries.
* SpecialFunctions.jl, Julia's own libm, Bessels.jl, SLEEFPirates.jl : Many core math functions have ancient Fortran or C implementations from OpenLibm and the like, and they're progressively being replaced with better, faster versions written in pure Julia that outperform the old ones.
-
Initializing an n^k array as a sparse array?
Otherwise, maybe check out ITensors.jl, or look for packages that aim to do the same thing?
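One hedged sketch of what that could look like with just the standard library, assuming "n^k array" means a tensor with n^k total entries: Julia's SparseArrays supports only 1- and 2-D containers natively, so the k-dimensional index is linearized by hand here (the values of `n` and `k` are arbitrary):

```julia
using SparseArrays

n, k = 10, 3
# Flat sparse vector with n^k slots; only explicitly set entries use memory
v = spzeros(Float64, n^k)

# Map a Cartesian (i, j, l) index into the flat index space
idx = LinearIndices(ntuple(_ -> n, k))
v[idx[2, 3, 1]] = 4.2
v[idx[1, 1, 1]] = 1.0
```

For genuinely tensor-shaped sparse data, a dedicated package (e.g. one built for tensor networks) is likely the better fit, as suggested above.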
Octavian.jl
- Yann LeCun: ML would have advanced if other languages had been adopted versus Python
-
Julia 1.8 has been released
For some examples of people porting existing C++ and Fortran libraries to Julia, you should check out https://github.com/JuliaLinearAlgebra/Octavian.jl, https://github.com/dgleich/GenericArpack.jl, and https://github.com/apache/arrow-julia (just off the top of my head). These are all ports of C++ or Fortran libraries that match (or exceed) the performance of the originals, and Arrow.jl in particular is faster, more general, and 10x less code.
-
Why Julia matrix multiplication so slow in this test?
Note that a performance-optimized Julia implementation is on par with, or even outperforms, the specialized high-performance BLAS libraries; see https://github.com/JuliaLinearAlgebra/Octavian.jl.
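A minimal sketch of what using Octavian looks like in practice (it exports `matmul`, a pure-Julia multithreaded matrix multiply; the matrix sizes below are arbitrary):

```julia
using Octavian

A = rand(200, 300)
B = rand(300, 100)

# Pure-Julia multithreaded matrix multiplication
C = Octavian.matmul(A, B)

# Agrees with the BLAS-backed built-in `*` up to floating-point roundoff
C ≈ A * B
```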
-
Multiple dispatch: Common Lisp vs Julia
If you look at the thread for your first reference, there were a large number of performance improvements suggested that resulted in a 30x speedup when combined. I'm not sure what you're looking at for your second link, but Julia is faster than Lisp in the n-body, spectral-norm, mandelbrot, pidigits, regex, fasta, k-nucleotide, and reverse-complement benchmarks (8 out of 10). For Julia going faster than C/Fortran, I would direct you to https://github.com/JuliaLinearAlgebra/Octavian.jl, a Julia program that beats MKL and OpenBLAS at matrix multiplication (one of the most heavily optimized algorithms in the world).
-
Why Fortran is easy to learn
> But in the end, it's FORTRAN all the way down. Even in Julia.
That's not true. None of the Julia differential equation solver stack calls into Fortran anymore. We have our own BLAS tools that outperform OpenBLAS and MKL in the instances we use them for (mostly LU factorization), and those are all written in pure Julia. See https://github.com/YingboMa/RecursiveFactorization.jl, https://github.com/JuliaSIMD/TriangularSolve.jl, and https://github.com/JuliaLinearAlgebra/Octavian.jl. And this is just one part of the DiffEq performance story. All of it is, of course, validated on https://github.com/SciML/SciMLBenchmarks.jl
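For instance, the LU factorization mentioned above can be done in pure Julia via RecursiveFactorization.jl, which returns the same `LU` object as `LinearAlgebra.lu` (a sketch; the matrix size is arbitrary):

```julia
using RecursiveFactorization, LinearAlgebra

A = rand(100, 100)

# Pure-Julia pivoted LU; drop-in replacement for LinearAlgebra.lu
F = RecursiveFactorization.lu(copy(A))

# Same factorization object, so the usual accessors work:
# F.L * F.U reconstructs the row-permuted input A[F.p, :]
residual = norm(F.L * F.U - A[F.p, :])
```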
-
Show HN: prometeo – a Python-to-C transpiler for high-performance computing
Well, IMO it can definitely be rewritten in Julia, and more easily than in Python, since Julia allows hooking into the compiler pipeline at many points in the stack. It's lispy and built from the ground up for codegen, with libraries like Metatheory.jl (https://github.com/JuliaSymbolics/Metatheory.jl) that provide high-level pattern matching with e-graphs. The question is whether it's worth your time to learn Julia to do so.
You could also do it at the LLVM level: https://github.com/JuliaComputingOSS/llvm-cbe
For interesting takes on that, you can see https://github.com/JuliaLinearAlgebra/Octavian.jl, which relies on LoopVectorization.jl to do transforms on the Julia AST beyond what LLVM does. Because of that, Octavian.jl beats OpenBLAS on many linear algebra benchmarks.
-
Python behind the scenes #13: the GIL and its effects on Python multithreading
The initial results are that libraries like LoopVectorization.jl can already generate optimal micro-kernels and are competitive with MKL (for square matrix-matrix multiplication) up to around size 512. With help on the macro-kernel side from Octavian, Julia is able to outperform MKL for sizes up to 1000 or so (and is about 20% slower for bigger sizes). https://github.com/JuliaLinearAlgebra/Octavian.jl
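To illustrate the micro-kernel side: LoopVectorization.jl's `@turbo` macro vectorizes and unrolls a plain triple loop into a competitive kernel. A sketch along the lines of the package's documented matmul example (the function name `mygemm!` is a choice for illustration):

```julia
using LoopVectorization

# @turbo analyzes the loop nest and emits a SIMD-vectorized, unrolled kernel
function mygemm!(C, A, B)
    @turbo for n in axes(C, 2), m in axes(C, 1)
        Cmn = zero(eltype(C))
        for k in axes(A, 2)
            Cmn += A[m, k] * B[k, n]
        end
        C[m, n] = Cmn
    end
    return C
end
```

Octavian then adds the macro-kernel layer (cache blocking, packing, threading) on top of kernels like this.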
-
From Julia to Rust
> The biggest reason is because some function of the high level language is incompatible with the application domain. Like garbage collection in hot or real-time code or proprietary compilers for processors. Julia does not solve these problems.
The presence of garbage collection in Julia is not a problem at all for hot, high-performance code. There's nothing stopping you from manually managing your memory in Julia.
The easiest way would be to just preallocate your buffers and hold onto them so they don't get collected. Octavian.jl is a BLAS library written in Julia that's faster than OpenBLAS and MKL for small matrices and saturates to the same speed for very large matrices [1]. These are some of the hottest loops possible!
For true hard real-time work, yes, Julia is not a good choice, but it's perfectly fine for soft real-time.
[1] https://github.com/JuliaLinearAlgebra/Octavian.jl/issues/24#...
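A minimal sketch of the preallocation pattern described above, using Octavian's in-place `matmul!` (the sizes and iteration count are arbitrary):

```julia
using Octavian

# Allocate the buffers once, outside the hot loop
A = rand(64, 64)
B = rand(64, 64)
C = similar(A)

# matmul! writes the product into C in place, so steady-state
# iterations allocate nothing for the GC to collect
for _ in 1:1_000
    Octavian.matmul!(C, A, B)
end
```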
-
Julia 1.6 addresses latency issues
If you want performance benchmarks vs Fortran, https://benchmarks.sciml.ai/html/MultiLanguage/wrapper_packa... has benchmarks with Julia outperforming highly optimized Fortran DiffEq solvers, and https://github.com/JuliaLinearAlgebra/Octavian.jl shows that pure-Julia BLAS implementations can compete with MKL and OpenBLAS, which are among the most heavily optimized pieces of code ever written. Furthermore, Julia has been used on some of the world's fastest supercomputers (in the performance-critical bits), which as far as I know isn't true of Swift/Kotlin/C#.
Expressiveness is hard to judge objectively, but in my opinion at least, multiple dispatch is a massive win for writing composable, reusable code, and there really isn't anything that compares to Julia on that front.
- Octavian.jl – BLAS-like Julia procedures for CPU
What are some alternatives?
Fastor - A lightweight high performance tensor algebra framework for modern C++
OpenBLAS - OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.
danfojs - Danfo.js is an open source, JavaScript library providing high performance, intuitive, and easy to use data structures for manipulating and processing structured data.
Symbolics.jl - Symbolic programming for the next generation of numerical software
Measurements.jl - Error propagation calculator and library for physical measurements. It supports real and complex numbers with uncertainty, arbitrary precision calculations, operations with arrays, and numerical integration.
owl - Owl - OCaml Scientific Computing @ https://ocaml.xyz
NTNk.jl - Unsupervised Machine Learning: Nonnegative Tensor Networks + k-means clustering
Verilog.jl - Verilog for Julia
ProtoStructs.jl - Easy prototyping of structs
Automa.jl - A julia code generator for regular expressions
RecursiveArrayTools.jl - Tools for easily handling objects like arrays of arrays and deeper nestings in scientific machine learning (SciML) and other applications
StaticCompiler.jl - Compiles Julia code to a standalone library (experimental)