Rust and what it needs to gain space in computation-oriented applications
7 projects | reddit.com/r/rust | 24 Nov 2021
You should check out polars, datafusion, influxdb iox and databend, all written in native Rust and powered by the Apache Arrow format. Polars in particular is pretty damn fast and has bindings for Python.
Database-Like Ops Benchmark
1 project | news.ycombinator.com | 20 Nov 2021
A better dtypes for pandas dataframes pulled from Postgres
1 project | reddit.com/r/datascience | 14 Nov 2021
Here is a good comparison: https://h2oai.github.io/db-benchmark/
Introducing tidypolars - a Python data frame package with syntax familiar to R tidyverse users
4 projects | reddit.com/r/datascience | 10 Nov 2021
The biggest difference with this one is that it's built on top of the polars package, which is probably the fastest data frame manipulation library out there. All of the other dplyr-to-python packages are built on top of pandas (which is very slow in comparison).
Introducing tidypolars - a Python data frame package for R tidyverse users
9 projects | reddit.com/r/rstats | 10 Nov 2021
I think having a basic understanding of pandas, given how broadly it's used, is beneficial. That being said, polars seems to be matching or beating data.table in performance, so I think it'd be very worth it to take it up. Wes McKinney, creator of pandas, has been quite vocal about the architectural flaws of pandas, which is why he's been working on the Arrow project. polars is based on Arrow, so in principle it's kinda like pandas 2.0 (adopting the changes that Wes proposed).
tidypolars uses the polars package as a backend, which might be the fastest data frame manipulation library out there. (Faster even than R's data.table, which has been the king of speed for many years.)
Your perfect program/language for experience studies?
1 project | reddit.com/r/actuary | 4 Nov 2021
Julia has ExperienceStudies.jl to help with exposure calculations and MortalityTables.jl for mortality rate data. It also performs very well in data science benchmarks: https://h2oai.github.io/db-benchmark/
Comparing SQLite, DuckDB and Arrow
5 projects | news.ycombinator.com | 27 Oct 2021
this benchmark is more comprehensive for this type of analytical work:
1 project | reddit.com/r/datascience | 23 Oct 2021
Data too big to fit in memory can be handled in R too, using SparkR. I agree the documentation for something like PySpark is better, though. For in-memory data, data.table in R beats pandas. It loses to Polars (implemented in Rust, with Python bindings), but Polars isn't widely used yet since it's new: https://github.com/h2oai/db-benchmark.
Turning database into a searchable dashboard?
3 projects | reddit.com/r/datascience | 21 Oct 2021
Show HN: prometeo – a Python-to-C transpiler for high-performance computing
19 projects | news.ycombinator.com | 17 Nov 2021
Well IMO it can definitely be rewritten in Julia, and to an easier degree than Python, since Julia allows hooking into the compiler pipeline at many points in the stack. It's lispy and built from the ground up for codegen, with libraries like Metatheory.jl (https://github.com/JuliaSymbolics/Metatheory.jl) that provide high-level pattern matching with e-graphs. The question is whether it's worth your time to learn Julia to do so.
You could also do it at the LLVM level: https://github.com/JuliaComputingOSS/llvm-cbe
For interesting takes on that, you can see https://github.com/JuliaLinearAlgebra/Octavian.jl, which relies on LoopVectorization.jl to do transforms on the Julia AST beyond what LLVM does. Because of that, Octavian.jl beats OpenBLAS on many linalg benchmarks.
Python behind the scenes #13: the GIL and its effects on Python multithreading
2 projects | news.ycombinator.com | 29 Sep 2021
The initial results are that libraries like LoopVectorization.jl can already generate optimal micro-kernels and are competitive with MKL (for square matrix-matrix multiplication) up to around size 512. With help on the macro-kernel side from Octavian, Julia is able to outperform MKL for sizes up to 1000 or so (and is about 20% slower for bigger sizes). https://github.com/JuliaLinearAlgebra/Octavian.jl
From Julia to Rust
14 projects | news.ycombinator.com | 5 Jun 2021
> The biggest reason is because some function of the high level language is incompatible with the application domain. Like garbage collection in hot or real-time code or proprietary compilers for processors. Julia does not solve these problems.
The presence of garbage collection in Julia is not a problem at all for hot, high-performance code. There's nothing stopping you from manually managing your memory in Julia.
The easiest way is to preallocate your buffers and hold onto them so they don't get collected. Octavian.jl is a BLAS library written in Julia that's faster than OpenBLAS and MKL for small matrices and saturates to the same speed for very large matrices. These are some of the hottest loops possible!
For true hard real-time, yes, Julia is not a good choice, but it's perfectly fine for soft real-time.
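The preallocation trick described above is not Julia-specific; the same idea can be sketched in Python with NumPy, where a buffer allocated once is reused across iterations so the hot loop triggers no fresh allocations:

```python
import numpy as np

a = np.full((256, 256), 1.0)
b = np.full((256, 256), 1.0)
out = np.empty((256, 256))  # allocated once, outside the hot loop

for _ in range(10):
    # out= writes into the existing buffer instead of allocating a new
    # result array each pass: the "hold onto your buffers" strategy.
    np.matmul(a, b, out=out)
```

Since `a` and `b` are all ones, every entry of the product is the sum of 256 ones, i.e. 256.0, regardless of how many times the loop reuses the buffer.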
Julia 1.6 addresses latency issues
5 projects | news.ycombinator.com | 25 May 2021
If you want performance benchmarks vs Fortran, https://benchmarks.sciml.ai/html/MultiLanguage/wrapper_packa... has benchmarks with Julia outperforming highly optimized Fortran DiffEq solvers, and https://github.com/JuliaLinearAlgebra/Octavian.jl shows that pure Julia BLAS implementations can compete with MKL and OpenBLAS, which are among the most heavily optimized pieces of code ever written. Furthermore, Julia has been used on some of the world's fastest supercomputers (in the performance-critical bits), which as far as I know isn't true of Swift/Kotlin/C#.
Expressiveness is hard to judge objectively, but in my opinion at least, Multiple Dispatch is a massive win for writing composable, re-usable code, and there really isn't anything that compares on that front to Julia.
Octavian.jl – BLAS-like Julia procedures for CPU
1 project | news.ycombinator.com | 23 May 2021
An early look at Postgres 14 performance and monitoring improvements
4 projects | news.ycombinator.com | 22 May 2021
Rust vs Fortran
2 projects | reddit.com/r/ProgrammingLanguages | 3 Apr 2021
Concerning the second point, code that is automatically adapted to both the problem and the hardware can apparently get even faster than the most careful hand optimization. Here are benchmark results comparing the matmul performance of the highly optimized OpenBLAS and MKL packages against automatic optimizations written in Julia: Octavian.jl reaches superior performance for small-to-medium matrix sizes and is comparable for large matrices. more benchmarks
Programmers of Reddit whats your favourite programming language and why?
6 projects | reddit.com/r/AskReddit | 31 Mar 2021
If you want to get maximum performance, it is more effort, but still possible in Julia itself. For example, there is https://github.com/JuliaLinearAlgebra/Octavian.jl, a project for implementing BLAS in Julia. It is still very WIP and immature, but able to reach the performance of OpenBLAS / MKL (probably among the most optimized libraries ever), and sometimes even surpass it (https://github.com/JuliaLinearAlgebra/Octavian.jl/issues/24#issuecomment-766243445).
Julia Receives DARPA Award to Accelerate Electronics Simulation by 1,000x
7 projects | news.ycombinator.com | 11 Mar 2021
The pure Julia (sub)BLAS libraries (they are incomplete right now) that benchmark best are Octavian.jl and PaddedMatrices.jl. On Ryzen these BLASes are doing extremely well:
but also on Intel:
I personally wouldn't spend too much time on BLAS-limited applications though, and this kind of circuit modeling is not one of them, as I describe in another post. Also, it's 1000x at 99% accuracy: it's essentially a form of automated model order reduction, where you choose a tolerance and get more speedup by matching the original circuit only to that tolerance.
What do you guys think of Julia in terms of speed?
1 project | reddit.com/r/cpp_questions | 12 Feb 2021
What are some alternatives?
arrow-datafusion - Apache Arrow DataFusion and Ballista query engines
polars - Fast multi-threaded DataFrame library in Rust and Python
DataFramesMeta.jl - Metaprogramming tools for DataFrames
Automa.jl - A julia code generator for regular expressions
csvs-to-sqlite - Convert CSV files into a SQLite database
sktime - A unified framework for machine learning with time series
Symbolics.jl - A fast and modern CAS for a fast and modern language.
Preql - An interpreted relational query language that compiles to SQL.
arrow-rs - Official Rust implementation of Apache Arrow
databend - An elastic and reliable serverless data warehouse that offers blazing-fast queries and combines the elasticity, simplicity, and low cost of the cloud; built to make the Data Cloud easy
skorch - A scikit-learn compatible neural network library that wraps PyTorch
owl - Owl - OCaml Scientific and Engineering Computing @ http://ocaml.xyz