LoopVectorization.jl VS julia

Compare LoopVectorization.jl vs julia and see what are their differences.

                 LoopVectorization.jl   julia
Mentions         10                     350
Stars            720                    44,510
Growth           0.0%                   0.9%
Activity         7.6                    10.0
Latest commit    1 day ago              3 days ago
Language         Julia                  Julia
License          MIT License            MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

LoopVectorization.jl

Posts with mentions or reviews of LoopVectorization.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-02.
  • Mojo – a new programming language for all AI developers
    7 projects | news.ycombinator.com | 2 May 2023
    It is a little disappointing that they're setting the bar against vanilla Python in their comparisons. While I'm sure they have put massive engineering effort into their ML compiler, the demos they showed of matmul are not that impressive in an absolute sense; with the analogous Julia code, making use of [LoopVectorization.jl](https://github.com/JuliaSIMD/LoopVectorization.jl) to automatically choose good defaults for vectorization, etc...

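    (A hedged sketch of the kind of kernel being described, reconstructed rather than quoted from the comment: a naive matmul accelerated with LoopVectorization's `@turbo` macro.)

    ```julia
    using LoopVectorization

    # Naive triple-loop matmul; @turbo picks the unroll/SIMD strategy.
    function matmul!(C, A, B)
        @turbo for m in axes(A, 1), n in axes(B, 2)
            Cmn = zero(eltype(C))
            for k in axes(A, 2)
                Cmn += A[m, k] * B[k, n]
            end
            C[m, n] = Cmn
        end
        return C
    end
    ```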

  • Knight’s Landing: Atom with AVX-512
    1 project | news.ycombinator.com | 10 Dec 2022
  • Python 3.11 is 25% faster than 3.10 on average
    13 projects | news.ycombinator.com | 6 Jul 2022
    > My mistake in retrospect was using small arrays as part of a struct, which being immutable got replaced at each time step with a new struct requiring new arrays to be allocated and initialized. I would not have done that in c++, but julia puts my brain in matlab mode.

    I see. Yes, it's an interesting design space where Julia makes both heap and stack allocations easy enough, so sometimes you just reach for the heap like in MATLAB mode. Hopefully Prem and Shuhei's work lands soon enough to stack-allocate small non-escaping arrays so that users don't need to think about this.
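
    As a sketch of the workaround (using StaticArrays.jl, my assumption rather than something named above): small fixed-size `SVector` fields are immutable and stack-friendly, so rebuilding the struct each time step allocates nothing on the heap.

    ```julia
    using StaticArrays

    # Immutable struct of SVectors: "replacing" it every time step
    # requires no heap allocation, unlike fields of type Vector.
    struct Particle
        pos::SVector{3,Float64}
        vel::SVector{3,Float64}
    end

    step(p::Particle, dt) = Particle(p.pos + dt * p.vel, p.vel)
    ```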

    > Alignment I'd assumed, but padding the struct instead of the tuple did nothing, so it's probably extra work to clear a piece of a SIMD load. Any insight on why AVX availability didn't help would be appreciated. I did verify some AVX instructions were in the asm it generated, so it knew about them; it just didn't use them.

    The major differences at this point seem to come down to GCC (g++) vs LLVM, and to proofs of aliasing. LLVM's auto-vectorizer isn't that great, and it is less reliable at proving that two arrays don't alias. For the first part, some people have simply improved the loop analysis code from the Julia side (https://github.com/JuliaSIMD/LoopVectorization.jl); forcing SIMD onto LLVM can help it make the right choices. But for the second part you do need `@simd ivdep for ...` (or LoopVectorization.jl) to match some C++ examples. This is hopefully one of the things that JET.jl and other new analysis passes can help with, along with the new effects system (see https://github.com/JuliaLang/julia/pull/43852; this is a pretty big new compiler feature in v1.8, but right now effects are manually specified, and it will take time before things like https://github.com/JuliaLang/julia/pull/44822 land and start to make them more pervasive). When that's all together, LLVM will have more ammo for proving things more effectively (pun intended).
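
    For reference, a minimal sketch of the `@simd ivdep` pattern mentioned above; `ivdep` asserts there are no loop-carried dependencies, so it is only safe when the arrays genuinely don't alias:

    ```julia
    function axpy!(y, a, x)
        @simd ivdep for i in eachindex(y, x)
            @inbounds y[i] += a * x[i]
        end
        return y
    end
    ```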

  • Vectorize function calls
    2 projects | /r/Julia | 25 Apr 2022
    This looks nice too. It seems to be maintained, and it even has a `vmap` function. What more can one ask for? ;) https://github.com/JuliaSIMD/LoopVectorization.jl
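    A quick sketch of how `vmap` is used (hedged, from memory; it maps a function elementwise over one or more arrays with SIMD):

    ```julia
    using LoopVectorization

    xs, ys = rand(1024), rand(1024)
    zs = vmap((a, b) -> muladd(a, b, 1.0), xs, ys)  # SIMD-vectorized map
    ```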
  • Implementing dedispersion in Julia.
    4 projects | /r/Julia | 16 Mar 2022
    Have you checked out https://github.com/JuliaSIMD/LoopVectorization.jl? It may be useful for your specific use case.
  • We Use Julia, 10 Years Later
    10 projects | news.ycombinator.com | 14 Feb 2022
    And the "how" behind Octavian.jl is basically LoopVectorization.jl [1], which helps make optimal use of your CPU's SIMD instructions.

    Currently there can be some nontrivial compilation latency with this approach, but since LV ultimately emits custom LLVM IR, it's actually perfectly compatible with StaticCompiler.jl [2] following Mason's rewrite, so stay tuned on that front.

    [1] https://github.com/JuliaSIMD/LoopVectorization.jl

    [2] https://github.com/tshort/StaticCompiler.jl

  • Why Lisp? (2015)
    21 projects | news.ycombinator.com | 26 Oct 2021
    Yes, and sorry if I also came off as combative here, it was not my intention either. I've used some Common Lisp before I got into Julia (though I never got super proficient with it) and I think it's an excellent language and it's too bad it doesn't get more attention.

    I just wanted to share what I think is cool about julia from a metaprogramming point of view, which I think is actually its greatest strength.

    > here is a hypothetical question that can be asked: would a julia programmer be more powerful if llvm was written in julia? i think the answer is clear that they would be

    Sure, I'd agree it'd be great if LLVM was written in julia. However, I also don't think it's a very high priority because there are all sorts of ways to basically slap LLVM's hands out of the way and say "no, I'll just do this part myself."

    E.g. consider LoopVectorization.jl [1] which is doing some very advanced program transformations that would normally be done at the LLVM (or lower) level. This package is written in pure Julia and is all about bypassing LLVM's pipelines and creating hyper efficient microkernels that are competitive with the handwritten assembly in BLAS systems.

    To your point, yes, Chris' life likely would have been easier here if LLVM were written in julia. But he also managed to create this with far less manpower, in far less time, than anything like it that I know of, and it's screaming fast, so I don't think LLVM not being implemented in julia was such a huge impediment for him.

    [1] https://github.com/JuliaSIMD/LoopVectorization.jl

  • A Project of One’s Own
    2 projects | news.ycombinator.com | 8 Jun 2021
    He still holds a few land speed records he set with motorcycles he designed and built.

    But I had no real hobbies or passions of my own, other than playing card games.

    It wasn't until my twenties, after I already graduated college with degrees I wasn't interested in and my dad's health failed, that I first tried programming. A decade earlier, my dad was attending the local Linux meetings when away from his machine shop.

    Programming, and especially performance optimization/loop vectorization are now my passion and consume most of my free time (https://github.com/JuliaSIMD/LoopVectorization.jl).

    Hearing all the stories about people starting and getting hooked when they were 11 makes me feel like I lost a dozen years of my life. I had every opportunity, but just didn't take them. If I had children, I would worry for them.

  • When porting numpy code to Julia, is it worth it to keep the code vectorized?
    1 project | /r/Julia | 7 Jun 2021
    Julia will often do SIMD under the hood with either a for loop or a broadcasted version, so you generally shouldn't have to worry about it. But for more advanced cases you can look at https://github.com/JuliaSIMD/LoopVectorization.jl
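    For instance (a hedged sketch), both the explicit loop and the broadcast below typically compile down to SIMD instructions without any annotations:

    ```julia
    function scale_loop!(y, x)
        for i in eachindex(y, x)
            @inbounds y[i] = 2x[i] + 1
        end
        return y
    end

    scale_bcast!(y, x) = (y .= 2 .* x .+ 1; y)  # broadcast form; typically vectorizes the same way
    ```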
  • Julia 1.6 Highlights
    9 projects | news.ycombinator.com | 25 Mar 2021
    Very often benchmarks include Julia's compilation time, which might be slow. Sometimes they rightfully do so, but often it's really apples and oranges when benchmarking against C/C++/Rust/Fortran. Julia 1.6 shows compilation time in the `@time f()` macro, but Julia programmers typically use `@btime` from the BenchmarkTools package to get better timings (e.g. median runtime over n function calls).
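
    For example (standard BenchmarkTools usage; the `$` interpolation avoids timing access to a global variable):

    ```julia
    using BenchmarkTools

    x = rand(1000)
    @time sum(x)    # first call may include compilation time
    @btime sum($x)  # re-runs the call many times and reports a summary timing
    ```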

    I think it's more interesting to see what people do with the language instead of focusing on microbenchmarks. There's for instance this great package https://github.com/JuliaSIMD/LoopVectorization.jl, which exports a simple macro `@avx` that you can stick onto loops to vectorize them better than the compiler (=LLVM) does. It's quite remarkable that you can implement this as a package within the language, as opposed to waiting for LLVM to improve or for the julia compiler team to figure it out.

    See the docs which kinda read like blog posts: https://juliasimd.github.io/LoopVectorization.jl/stable/

julia

Posts with mentions or reviews of julia. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-06.
  • Top Paying Programming Technologies 2024
    19 projects | dev.to | 6 Mar 2024
    34. Julia - $74,963
  • Optimize sgemm on RISC-V platform
    6 projects | news.ycombinator.com | 28 Feb 2024
    I don't believe there is any official documentation on this, but https://github.com/JuliaLang/julia/pull/49430 for example added prefetching to the marking phase of the GC, which saw speedups on x86 but not on M1.
  • Dart 3.3
    2 projects | news.ycombinator.com | 15 Feb 2024
    3. dispatch on all the arguments

    the first solution is clean, but people really like dispatch.

    the second makes calling functions in the function call syntax weird, because the first argument is privileged semantically but not syntactically.

    the third makes calling functions in the method call syntax weird because the first argument is privileged syntactically but not semantically.

    the closest things to this i can think of off the top of my head in remotely popular programming languages are: nim, lisp dialects, and julia.

    nim navigates the dispatch conundrum by providing different ways to define free functions for different dispatch-ness. the tutorial gives a good overview: https://nim-lang.org/docs/tut2.html

    lisps of course lack UFCS.

    see here for a discussion on the lack of UFCS in julia: https://github.com/JuliaLang/julia/issues/31779

    so to sum up the answer to the original question: because it's only obvious how to make it nice and tidy like you're wanting if you sacrifice function dispatch, which is ubiquitous for good reason!
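
    As an illustration of option 3 (a generic sketch, not code from the thread), Julia picks a method based on the runtime types of all positional arguments:

    ```julia
    abstract type Pet end
    struct Dog <: Pet end
    struct Cat <: Pet end

    # The method chosen depends on the types of *both* arguments.
    meets(::Dog, ::Dog) = "sniff each other"
    meets(::Dog, ::Cat) = "chase"
    meets(::Cat, ::Dog) = "hiss"
    meets(::Cat, ::Cat) = "ignore each other"

    encounter(a::Pet, b::Pet) = println("They ", meets(a, b), ".")
    ```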

  • Julia 1.10 Highlights
    1 project | news.ycombinator.com | 27 Dec 2023
    https://github.com/JuliaLang/julia/blob/release-1.10/NEWS.md
  • Best Programming languages for Data Analysis📊
    4 projects | dev.to | 7 Dec 2023
    Visit official site: https://julialang.org/
  • Potential of the Julia programming language for high energy physics computing
    10 projects | news.ycombinator.com | 4 Dec 2023
    No. It runs natively on ARM.

    ```
    julia> versioninfo()
    Julia Version 1.9.3
    Commit bed2cd540a1 (2023-08-24 14:43 UTC)
    Build Info:
      Official https://julialang.org/ release
    ```

  • Rust std:fs slower than Python
    7 projects | news.ycombinator.com | 29 Nov 2023
    https://github.com/JuliaLang/julia/issues/51086#issuecomment...

    So while this "fixes" the issue, it'll introduce a confusing time delay between you freeing the memory and you observing that in `htop`.

    But according to https://jemalloc.net/jemalloc.3.html you can set `opt.muzzy_decay_ms = 0` to remove the delay.

    Still, the musl author has some reservations against making `jemalloc` the default:

    https://www.openwall.com/lists/musl/2018/04/23/2

    > It's got serious bloat problems, problems with undermining ASLR, and is optimized pretty much only for being as fast as possible without caring how much memory you use.

    With the above-mentioned tunables, this should be mitigated to some extent, but the general "theme" (focusing on e.g. performance vs memory usage) will likely still mean "it's a tradeoff" or "it's no tradeoff, but only if you set tunables to what you need".

  • Eleven strategies for making reproducible research the norm
    1 project | news.ycombinator.com | 25 Nov 2023
    I have asked about Julia's reproducibility story on the Guix mailing list in the past, and at the time Simon Tournier didn't think it was promising. I seem to recall Julia itself didn't have a reproducible build. All I know now is that the GitHub issue below is still not closed.

    https://github.com/JuliaLang/julia/issues/34753

  • Julia as a unifying end-to-end workflow language on the Frontier exascale system
    5 projects | news.ycombinator.com | 19 Nov 2023
    I don't really know what kind of rebuttal you're looking for, but I will link my HN comments from when this was first posted for some thoughts: https://news.ycombinator.com/item?id=31396861#31398796. As I said in the linked post, I'm quite skeptical of the business of trying to assess the relative bugginess of programming in different systems, because that depends strongly on what you consider core vs packages and on what exactly you're trying to do.

    However, bugs in general suck and we've been thinking a fair bit about what additional tooling the language could provide to help people avoid the classes of bugs that Yuri encountered in the post.

    The biggest class of problems in the blog post is that it's pretty clear that `@inbounds` (and I will extend this to `@assume_effects`, even though that wasn't around when Yuri wrote his post) is problematic, because it's too hard to use correctly. My proposal for what to do instead is at https://github.com/JuliaLang/julia/pull/50641.
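
    To make the hazard concrete (a hedged sketch of the bug class from Yuri's post, not the proposal in that PR): `@inbounds` silently turns a wrong indexing assumption into undefined behavior.

    ```julia
    function mysum(x::AbstractVector)
        s = zero(eltype(x))
        @inbounds for i in 1:length(x)  # wrong for offset-indexed arrays; eachindex(x) is safe
            s += x[i]
        end
        return s
    end
    ```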

    Another common theme is that while Julia is great at composition, it's not clear what's expected to work and what isn't, because the interfaces are informal and not checked. This is a hard design problem, because it's quite close to the reasons why Julia works well. My current thoughts on that are here: https://github.com/Keno/InterfaceSpecs.jl but there's other proposals also.

  • Getaddrinfo() on glibc calls getenv(), oh boy
    10 projects | news.ycombinator.com | 16 Oct 2023
    Doesn't musl have the same issue? https://github.com/JuliaLang/julia/issues/34726#issuecomment...

    I also wonder about OSX's libc. Newer versions seem to have some sort of locking https://github.com/apple-open-source-mirror/Libc/blob/master...

    but older versions (from 10.9) don't have any locking: https://github.com/apple-oss-distributions/Libc/blob/Libc-99...

What are some alternatives?

When comparing LoopVectorization.jl and julia you can also consider the following projects:

CUDA.jl - CUDA programming in Julia.

jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more

StaticCompiler.jl - Compiles Julia code to a standalone library (experimental)

NetworkX - Network Analysis in Python

cl-cuda - Cl-cuda is a library to use NVIDIA CUDA in Common Lisp programs.

Lua - Lua is a powerful, efficient, lightweight, embeddable scripting language. It supports procedural programming, object-oriented programming, functional programming, data-driven programming, and data description.

julia-vim - Vim support for Julia.

rust-numpy - PyO3-based Rust bindings of the NumPy C-API

cmu-infix - Updated infix.cl of the CMU AI repository, originally written by Mark Kantrowitz

Numba - NumPy aware dynamic Python compiler using LLVM

bel - An interpreter for Bel, Paul Graham's Lisp language

F# - Please file issues or pull requests here: https://github.com/dotnet/fsharp