Fortran-code-on-GitHub vs Octavian.jl

| | Fortran-code-on-GitHub | Octavian.jl |
|---|---|---|
| Mentions | 9 | 17 |
| Stars | 261 | 222 |
| Growth | - | 0.0% |
| Activity | 9.8 | 3.9 |
| Latest commit | 2 days ago | 28 days ago |
| Language | - | Julia |
| License | The Unlicense | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Fortran-code-on-GitHub
- Fortran 2023 has been published
-
Any help or tips for Neural Networks on Computer Clusters
The hints already given stand ("there is more infrastructure available outside Fortran; consider using it instead"). Beliavsky's compilation Fortran code on GitHub, with its section on neural networks and machine learning, may still be worth a visit, e.g. for how to let Fortran reach out to implementations in other languages.
-
Is Fortran good for programming AI?
There is an interesting directory of projects around Fortran, Fortran code on GitHub. Though artificial intelligence does not appear by name, the section Neural Networks and Machine Learning may provide an entry point.
- Directory of Fortran codes on GitHub, arranged by topic
-
how do you deal with not having common useful functions and data-structures that languages like c++ have?
My list of Fortran codes on GitHub has a section Containers and Generic Programming with some of the data structures you mention.
-
Why Fortran is easy to learn
There's modern stuff being written in astro(nomy/physics) (I can attest to some of the codebases listed in https://github.com/Beliavsky/Fortran-code-on-GitHub#astrophy... being modern, at least in terms of development), but I'd say C++ likely does have the upper hand for newer codebases (unless things have changed dramatically since I last looked, algorithms that don't nicely align with nd-arrays are still painful in Fortran).
I've also heard rumours of Julia and even Rust being used (the latter because of the ability to reuse libraries in the browser, e.g. for visualisation), but the writers of these codebases (and the Fortran/C/C++/Java ones) are unusual; Python and R (and, for some holdouts, IDL) are what most people write in (even if those languages call something else).
-
Ask HN: What tools do people use for Computational Economics?
"QuantEcon:Open source code for economic modeling" https://quantecon.org/ has Python and Julia versions. The Federal Reserve uses Julia in its macroeconomic models: https://frbny-dsge.github.io/DSGE.jl/latest/ . Some economists use Fortran (which is much modernized since FORTRAN 77), and there is a 2018 book Introduction to Computational Economics using Fortran https://www.ce-fortran.com/ . Some Fortran codes in economics, statistics, and time series analysis are listed at https://github.com/Beliavsky/Fortran-code-on-GitHub .
-
Climate Change Open Source Projects on GitHub
At the "Fortran Code on GitHub" repo https://github.com/Beliavsky/Fortran-code-on-GitHub there are many codes listed in the "Climate and Weather" and "Earth Science" sections.
-
A simple string handling library for Microsoft Fortran-80
Fortran 77 and later versions (most recently Fortran 2018) have strings. There is the limitation that the elements of an array of strings must have equal length, so that ["boy","girl"] is invalid but ["boy ","girl"] is valid. Libraries for manipulating strings in Fortran are listed at https://github.com/Beliavsky/Fortran-code-on-GitHub#strings .
Octavian.jl
- Yann Lecun: ML would have advanced if other lang had been adopted versus Python
-
Julia 1.8 has been released
For some examples of people porting existing C++ and Fortran libraries to Julia, you should check out https://github.com/JuliaLinearAlgebra/Octavian.jl, https://github.com/dgleich/GenericArpack.jl, https://github.com/apache/arrow-julia (just off the top of my head). These are all ports of C++ or Fortran libraries that match (or exceed) the performance of the original, and in the case of Arrow.jl, the Julia version is faster, more general, and 10x less code.
-
Why Julia matrix multiplication so slow in this test?
Note that a performance-optimized Julia implementation is on par with or even outperforms the specialized high-performance BLAS libraries; see https://github.com/JuliaLinearAlgebra/Octavian.jl .
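The claim is easy to try: Octavian exports `matmul` as a pure-Julia matrix multiply that can be compared against the built-in BLAS-backed `*`. A minimal sketch, assuming the Octavian package is installed:

```julia
using Octavian  # pure-Julia, BLAS-like matrix multiplication

A = rand(200, 300)
B = rand(300, 100)

C = Octavian.matmul(A, B)  # multithreaded pure-Julia multiply
@assert C ≈ A * B          # agrees with the BLAS-backed built-in *
```

Benchmarking the two (e.g. with BenchmarkTools.jl) is how the comparisons cited in these threads are made.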
-
Multiple dispatch: Common Lisp vs Julia
If you look at the thread for your first reference, there were a large number of performance improvements suggested that resulted in a 30x speedup when combined. I'm not sure what you're looking at for your second link, but Julia is faster than Lisp in the n-body, spectral norm, mandelbrot, pidigits, regex, fasta, k-nucleotide, and reverse-complement benchmarks (8 out of 10). For Julia going faster than C/Fortran, I would direct you to https://github.com/JuliaLinearAlgebra/Octavian.jl which is a Julia program that beats MKL and OpenBLAS for matrix multiplication (which is one of the most heavily optimized algorithms in the world).
-
Why Fortran is easy to learn
> But in the end, it's FORTRAN all the way down. Even in Julia.
That's not true. None of the Julia differential equation solver stack is calling into Fortran anymore. We have our own BLAS tools that outperform OpenBLAS and MKL in the instances we use it for (mostly LU-factorization) and those are all written in pure Julia. See https://github.com/YingboMa/RecursiveFactorization.jl, https://github.com/JuliaSIMD/TriangularSolve.jl, and https://github.com/JuliaLinearAlgebra/Octavian.jl. And this is one part of the DiffEq performance story. The performance of this of course is all validated on https://github.com/SciML/SciMLBenchmarks.jl
-
Show HN: prometeo – a Python-to-C transpiler for high-performance computing
Well, IMO it can definitely be rewritten in Julia, and to an easier degree than Python, since Julia allows hooking into the compiler pipeline at many levels of the stack. It's lispy and built from the ground up for codegen, with libraries like Metatheory.jl (https://github.com/JuliaSymbolics/Metatheory.jl) that provide high-level pattern matching with e-graphs. The question is whether it's worth your time to learn Julia to do so.
You could also do it at the LLVM level: https://github.com/JuliaComputingOSS/llvm-cbe
For interesting takes on that, you can see https://github.com/JuliaLinearAlgebra/Octavian.jl, which relies on LoopVectorization.jl to do transforms on the Julia AST beyond what LLVM does. Because of that, Octavian.jl beats OpenBLAS on many linalg benchmarks.
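The transform mentioned here operates on plain nested loops. A minimal micro-kernel in the style of LoopVectorization's own examples (a sketch, assuming the LoopVectorization package is installed):

```julia
using LoopVectorization

# Naive triple-loop matrix multiply; the @turbo macro rewrites the
# loop nest, choosing SIMD widths and unroll factors for the host CPU.
function mygemm!(C, A, B)
    @turbo for m in axes(A, 1), n in axes(B, 2)
        acc = zero(eltype(C))
        for k in axes(A, 2)
            acc += A[m, k] * B[k, n]
        end
        C[m, n] = acc
    end
    return C
end

A = rand(32, 32); B = rand(32, 32); C = similar(A)
mygemm!(C, A, B)
@assert C ≈ A * B
```

Octavian builds its macro-kernel (blocking, packing, threading) on top of micro-kernels of roughly this shape.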
-
Python behind the scenes #13: the GIL and its effects on Python multithreading
The initial results are that libraries like LoopVectorization can already generate optimal micro-kernels and are competitive with MKL (for square matrix-matrix multiplication) up to around size 512. With help on the macro-kernel side from Octavian, Julia is able to outperform MKL for sizes up to 1000 or so (and is about 20% slower for bigger sizes). https://github.com/JuliaLinearAlgebra/Octavian.jl
-
From Julia to Rust
> The biggest reason is because some function of the high level language is incompatible with the application domain. Like garbage collection in hot or real-time code or proprietary compilers for processors. Julia does not solve these problems.
The presence of garbage collection in Julia is not a problem at all for hot, high-performance code. There's nothing stopping you from manually managing your memory in Julia.
The easiest way would be to just preallocate your buffers and hold onto them so they don't get collected. Octavian.jl is a BLAS library written in Julia that's faster than OpenBLAS and MKL for small matrices and saturates to the same speed for very large matrices [1]. These are some of the hottest loops possible!
For true hard real-time, yes, Julia is not a good choice, but it's perfectly fine for soft real-time.
[1] https://github.com/JuliaLinearAlgebra/Octavian.jl/issues/24#...
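The preallocation pattern described above can be sketched with Octavian's in-place `matmul!` (illustrative usage, assuming the Octavian package is installed):

```julia
using Octavian

A = rand(64, 64); B = rand(64, 64)
C = similar(A)                 # allocate the output buffer once, up front

for _ in 1:100                 # hot loop: no allocation, so no GC pressure
    Octavian.matmul!(C, A, B)  # writes the product into C in place
end
@assert C ≈ A * B
```

Because every iteration reuses `C`, the loop body allocates nothing and the garbage collector has nothing to do.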
-
Julia 1.6 addresses latency issues
If you want performance benchmarks vs Fortran, https://benchmarks.sciml.ai/html/MultiLanguage/wrapper_packa... has benchmarks with Julia outperforming highly optimized Fortran DiffEq solvers, and https://github.com/JuliaLinearAlgebra/Octavian.jl shows that pure Julia BLAS implementations can compete with MKL and OpenBLAS, which are among the most heavily optimized pieces of code ever written. Furthermore, Julia has been used on some of the world's fastest supercomputers (in the performance-critical bits), which as far as I know isn't true of Swift/Kotlin/C#.
Expressiveness is hard to judge objectively, but in my opinion at least, Multiple Dispatch is a massive win for writing composable, re-usable code, and there really isn't anything that compares on that front to Julia.
- Octavian.jl – BLAS-like Julia procedures for CPU
What are some alternatives?
stdlib - Fortran Standard Library
OpenBLAS - OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.
cmake-cookbook - CMake Cookbook recipes.
Symbolics.jl - Symbolic programming for the next generation of numerical software
dockcross - Cross compiling toolchains in Docker images
owl - Owl - OCaml Scientific Computing @ https://ocaml.xyz
fpm - Fortran Package Manager (fpm)
Verilog.jl - Verilog for Julia
neural-fortran - A parallel framework for deep learning
Automa.jl - A julia code generator for regular expressions
string - Microsoft FORTRAN-80 (F80) string handling library. Simple, fast, mostly FORTRAN.
StaticCompiler.jl - Compiles Julia code to a standalone library (experimental)