LoopVectorization.jl VS cmu-infix

Compare LoopVectorization.jl and cmu-infix to see how they differ.

cmu-infix

Updated infix.cl of the CMU AI repository, originally written by Mark Kantrowitz (by quil-lang)
               LoopVectorization.jl    cmu-infix
Mentions       10                      4
Stars          720                     32
Growth         0.0%                    -
Activity       7.6                     0.0
Last commit    2 days ago              about 7 years ago
Language       Julia                   Common Lisp
License        MIT License             GNU General Public License v3.0 or later
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

LoopVectorization.jl

Posts with mentions or reviews of LoopVectorization.jl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-02.
  • Mojo – a new programming language for all AI developers
    7 projects | news.ycombinator.com | 2 May 2023
    It is a little disappointing that they're setting the bar against vanilla Python in their comparisons. While I'm sure they have put massive engineering effort into their ML compiler, the demos they showed of matmul are not that impressive in an absolute sense compared with the analogous Julia code, which uses [LoopVectorization.jl](https://github.com/JuliaSIMD/LoopVectorization.jl) to automatically choose good defaults for vectorization.

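    As a rough illustration (a sketch, not the original commenter's benchmark code), the kind of Julia kernel being referred to looks like this; `@turbo` takes the naive triple loop and chooses the unrolling and SIMD strategy itself:

    ```julia
    using LoopVectorization

    # Naive matmul kernel; @turbo picks the vectorization strategy.
    function mygemm!(C, A, B)
        @turbo for n in axes(C, 2), m in axes(C, 1)
            Cmn = zero(eltype(C))
            for k in axes(A, 2)
                Cmn += A[m, k] * B[k, n]
            end
            C[m, n] = Cmn
        end
        return C
    end
    ```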

  • Knight’s Landing: Atom with AVX-512
    1 project | news.ycombinator.com | 10 Dec 2022
  • Python 3.11 is 25% faster than 3.10 on average
    13 projects | news.ycombinator.com | 6 Jul 2022
    > My mistake in retrospect was using small arrays as part of a struct, which being immutable got replaced at each time step with a new struct requiring new arrays to be allocated and initialized. I would not have done that in c++, but julia puts my brain in matlab mode.

    I see. Yes, it's an interesting design space where Julia makes both heap and stack allocations easy enough, so sometimes you just reach for the heap as you would in MATLAB mode. Hopefully Prem and Shuhei's work lands soon enough to stack-allocate small non-escaping arrays, so that users don't need to think about this.

    > Alignment I'd assumed, but padding the struct instead of the tuple did nothing, so probably extra work to clear a piece of a SIMD load. Any insight on why AVX availability didn't help would be appreciated. I did verify some AVX instructions were in the asm it generated, so it knew about them; it just didn't use them.

    The major differences at this point seem to come down to GCC (g++) vs LLVM, and to proofs of aliasing. LLVM's auto-vectorizer isn't that great, and it is less reliable at proving that two arrays don't alias. For the first part, some people have simply improved the loop analysis code from the Julia side (https://github.com/JuliaSIMD/LoopVectorization.jl); forcing SIMD onto LLVM can help it make the right choices. But for the second part you do need to use `@simd ivdep for ...` (or LoopVectorization.jl) to match some C++ examples. This is hopefully one of the things that JET.jl and other new analysis passes can help with, along with the new effects system (see https://github.com/JuliaLang/julia/pull/43852; this is a pretty big new compiler feature in v1.8, but right now it's manually specified, and it will take time before things like https://github.com/JuliaLang/julia/pull/44822 land and start to make it more pervasive). When that's all together, LLVM will have more ammo for proving things more effectively (pun intended).
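
    A minimal sketch of that `@simd ivdep` pattern (illustrative code, not from the original thread); `ivdep` asserts that loop iterations are independent, so LLVM can vectorize without needing to prove that `y` and `x` don't alias:

    ```julia
    # `ivdep` promises LLVM there are no loop-carried dependencies,
    # sidestepping the aliasing proofs it often fails to make.
    function axpy!(y, a, x)
        @inbounds @simd ivdep for i in eachindex(y, x)
            y[i] += a * x[i]
        end
        return y
    end
    ```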

  • Vectorize function calls
    2 projects | /r/Julia | 25 Apr 2022
    This looks nice too. It seems to be maintained, and it even has a `vmap` function. What more can one ask for ;) https://github.com/JuliaSIMD/LoopVectorization.jl
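
    For reference, a small sketch of that `vmap` function in use (illustrative call, assuming LoopVectorization's exported API): it behaves like `map`, but applies the function elementwise with SIMD:

    ```julia
    using LoopVectorization

    x = rand(Float32, 1024)
    y = vmap(v -> v * v + one(v), x)   # SIMD-vectorized elementwise map
    ```
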
  • Implementing dedispersion in Julia.
    4 projects | /r/Julia | 16 Mar 2022
    Have you checked out https://github.com/JuliaSIMD/LoopVectorization.jl ? It may be useful for your specific use case.
  • We Use Julia, 10 Years Later
    10 projects | news.ycombinator.com | 14 Feb 2022
    And the "how" behind Octavian.jl is basically LoopVectorization.jl [1], which helps make optimal use of your CPU's SIMD instructions.

    Currently there can be some nontrivial compilation latency with this approach, but since LV ultimately emits custom LLVM, it's actually perfectly compatible with StaticCompiler.jl [2] following Mason's rewrite, so stay tuned on that front.

    [1] https://github.com/JuliaSIMD/LoopVectorization.jl

    [2] https://github.com/tshort/StaticCompiler.jl

  • Why Lisp? (2015)
    21 projects | news.ycombinator.com | 26 Oct 2021
    Yes, and sorry if I also came off as combative here; it was not my intention either. I used some Common Lisp before I got into Julia (though I never got super proficient with it), and I think it's an excellent language; it's too bad it doesn't get more attention.

    I just wanted to share what I think is cool about Julia from a metaprogramming point of view, which I think is actually its greatest strength.

    > here is a hypothetical question that can be asked: would a julia programmer be more powerful if llvm was written in julia? i think the answer is clear that they would be

    Sure, I'd agree it'd be great if LLVM were written in Julia. However, I also don't think it's a very high priority, because there are all sorts of ways to basically slap LLVM's hands out of the way and say "no, I'll just do this part myself."

    E.g. consider LoopVectorization.jl [1] which is doing some very advanced program transformations that would normally be done at the LLVM (or lower) level. This package is written in pure Julia and is all about bypassing LLVM's pipelines and creating hyper efficient microkernels that are competitive with the handwritten assembly in BLAS systems.

    To your point, yes, Chris' life likely would have been easier here if LLVM were written in Julia, but he also managed to create this with a lot less manpower and in a lot less time than anything like it that I know of, and it's screaming fast, so I don't think it was such a huge impediment for him that LLVM wasn't implemented in Julia.

    [1] https://github.com/JuliaSIMD/LoopVectorization.jl

  • A Project of One’s Own
    2 projects | news.ycombinator.com | 8 Jun 2021
    He still holds a few land speed records he set with motorcycles he designed and built.

    But I had no real hobbies or passions of my own, other than playing card games.

    It wasn't until my twenties, after I already graduated college with degrees I wasn't interested in and my dad's health failed, that I first tried programming. A decade earlier, my dad was attending the local Linux meetings when away from his machine shop.

    Programming, and especially performance optimization/loop vectorization, are now my passion and consume most of my free time (https://github.com/JuliaSIMD/LoopVectorization.jl).

    Hearing all the stories about people starting and getting hooked when they were 11 makes me feel like I lost a dozen years of my life. I had every opportunity, but just didn't take them. If I had children, I would worry for them.

  • When porting numpy code to Julia, is it worth it to keep the code vectorized?
    1 project | /r/Julia | 7 Jun 2021
    Julia will often do SIMD under the hood with either a for loop or a broadcasted version, so you generally shouldn't have to worry about it. But for more advanced cases you can look at https://github.com/JuliaSIMD/LoopVectorization.jl
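
    As a quick illustration (a sketch, not from the original post), here are the two equivalent styles; both are usually SIMD-vectorized automatically, and `@turbo` from LoopVectorization.jl can be applied to the loop form for the harder cases:

    ```julia
    using LoopVectorization

    # Explicit loop; @turbo chooses the SIMD/unrolling strategy.
    function scale_loop!(y, x)
        @turbo for i in eachindex(x)
            y[i] = 2 * x[i] + 1
        end
        return y
    end

    # Broadcast form; the compiler usually vectorizes this too.
    scale_bcast!(y, x) = (y .= 2 .* x .+ 1)
    ```
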
  • Julia 1.6 Highlights
    9 projects | news.ycombinator.com | 25 Mar 2021
    Very often benchmarks include Julia's compilation time, which can be slow. Sometimes they rightfully do so, but often it's really apples and oranges when benchmarking against C/C++/Rust/Fortran. Julia 1.6 shows compilation time in the `@time f()` macro, but Julia programmers typically use `@btime` from the BenchmarkTools package to get better timings (e.g. median runtime over n function calls).

    I think it's more interesting to see what people do with the language instead of focusing on microbenchmarks. There's for instance this great package https://github.com/JuliaSIMD/LoopVectorization.jl, which exports a simple macro `@avx` that you can stick onto loops to vectorize them better than the compiler (i.e., LLVM) does. It's quite remarkable that you can implement this in the language, as a package, as opposed to waiting for LLVM to improve or for the Julia compiler team to figure it out.

    See the docs which kinda read like blog posts: https://juliasimd.github.io/LoopVectorization.jl/stable/
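
    Putting the two tips together, a minimal sketch of the workflow described above (hypothetical function; `@avx` was later renamed `@turbo`):

    ```julia
    using BenchmarkTools, LoopVectorization

    function mysum(x)
        s = zero(eltype(x))
        @avx for i in eachindex(x)   # vectorized more aggressively than LLVM's default
            s += x[i]
        end
        return s
    end

    x = rand(10^6)
    @btime mysum($x)   # $ interpolates the global so setup isn't timed
    ```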

cmu-infix

Posts with mentions or reviews of cmu-infix. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-11-08.
  • From Common Lisp to Julia
    6 projects | news.ycombinator.com | 8 Nov 2022
    Fortunately, doing infix math in CL has been one small library include away since the '90s: https://github.com/quil-lang/cmu-infix
  • Failing to Learn Zig via Advent of Code
    17 projects | news.ycombinator.com | 17 Jan 2022
    The Lisp version can also be more readable with a macro (like https://github.com/quil-lang/cmu-infix): #I(a*(1.0-t) + b*t). Or something else that would let you write GP's preferred syntax. One of the things that makes Lisp Lisp is that if the parens are over-cumbersome, you have the tools to take them away. See also CL:LOOP.
  • Why Lisp? (2015)
    21 projects | news.ycombinator.com | 26 Oct 2021
    (The list of forms is passed, unevaluated and at compile time, to nest, which rewrites them using a right fold to nest things properly.)

    Somewhat similar is the arrow macro that Clojure popularized, which lets you get rid of (deep (nesting (like (this ...)))) where you have to remember evaluation order is inside-out and replace it with a flatter (-> (this ...) like nesting deep). Its implementation is also easy -- many macros are easy to write because Lisp's source code is itself a list data structure for which you can write code to process and manipulate just like any other lists.

    Another cool macro that's been around since 1993 is https://github.com/quil-lang/cmu-infix which lets you write math in infix style, e.g. #I( C[i, k] += A[i, j] * B[j, k] ) where A, B, and C are all matrices represented as 2D arrays. It's a lot more complicated than the nest macro, though.

    There are some other things that still make Lisp great in comparison to other languages, but they don't exactly have one-line code examples like [::-1], so I'll just describe them qualitatively. Common Lisp has CLOS, the first standardized OOP system. It's a lot more powerful than C++'s. It differs from many systems in that classes and methods are separate; among other things this gives you multiple dispatch: you can define polymorphic methods that dispatch not just on the first argument (the explicit 'self' in Python, implicit 'this' in other langs) but on all arguments. One thing it's useful for is getting rid of many laborious uses of the Builder and Visitor patterns; e.g. the need for double dispatch is a common reason to use the Visitor pattern, but in Lisp there's no need.

    CLOS also does "method combination", which lets you define :before, :after, and :around methods that run implicitly before/after/around a call. This gets rid of the Observer pattern, supports design-by-contract, and jibes well with multiple inheritance in that you can create "mixins" that classes can inherit from, with the only behavior being some :before/:after methods (e.g. logging, cleaning up resources, or validation).
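
    Since the other project in this comparison is written in Julia, here is a minimal sketch of the same multiple-dispatch idea in Julia syntax (illustrative types and names, not from the original comment):

    ```julia
    # `collide` dispatches on the runtime types of *both* arguments,
    # which is what makes the Visitor pattern's double dispatch unnecessary.
    abstract type Shape end
    struct Circle <: Shape end
    struct Square <: Shape end

    collide(a::Circle, b::Circle) = "circle meets circle"
    collide(a::Circle, b::Square) = "circle meets square"
    collide(a::Shape,  b::Shape)  = "two generic shapes"

    collide(Circle(), Square())   # returns "circle meets square"
    ```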

    Everything is truly dynamic: a class can even change its type at runtime, which may be an acceptable solution to the circle-ellipse problem, or just super convenient while developing. More fundamentally, "compile" is a built-in function, not something you have to do with a separate program. "Disassemble" is built-in too, so you can see what the compiler is doing and how optimized something is. You have full flexibility to define and redefine how your program works as it's running; no need to restart and lose state if you don't want to.

    Besides being killer for development (and the differences in development experience are a big part of why I still think Lisp is great compared to non-Lisp), this gives you a powerful way to do production debugging and hot-fixing too; a footgun you might not want most of the time, but you don't have to do anything special when you do want it. It can be very useful, e.g. if you've got a spacecraft 100 million miles from Earth (https://flownet.com/gat/jpl-lisp.html). I've also put some hobby stuff on a server, deployed as a single binary, but built so that if I want to change it I can either stop it, replace the binary, and start again, or just SSH in, connect to the live program from my editor with SSH forwarding, and load the new code changes just like I would when developing locally, with zero downtime.

    Lastly, Lisp's solution to error handling goes beyond traditional exception handling. Again this ties into the development experience: you get some compile-time warnings depending on the implementation (e.g. typos, undefined functions, bad types), but you'll hit runtime errors eventually, and Lisp provides the condition system to help deal with them. It can be used for signaling non-errors, which has its uses, but what you'll see first are probably unhandled errors. By default one will drop you into a debugger where the error occurred; the stack isn't immediately unwound. Here you can do whatever: inspect or change variables at different stack frame levels, recompile code if there's a way to fix things, restart computation at a specific frame... You'll also be given the option of "restarts", which might include just an "abort" that unwinds to the top level (possibly ending a thread) but can also include custom actions that resolve the error in different ways.

    For example, if you're parsing a CSV file and hit a value that is wrong somehow (empty, bad type, illegal value, bad word, whatever), your restarts might be to provide your own value or some default (which will be used, and computation resumes with the next value in the row), or to skip the whole row (moving on to the next one), or to skip the whole file (moving on to the next file, or finishing). Again this is very useful while debugging, but in production you can either program in default resolutions (and a catch-all handler that logs unhandled errors, as usual) or give the choice to the user (in a friendlier way than exposing the debugger, if you please).

  • An Intuition for Lisp Syntax
    4 projects | news.ycombinator.com | 27 May 2021
    You don't have to give up on anything, that's the beauty of Lisp. Here's a library from 1993: https://github.com/quil-lang/cmu-infix

    Though personally I don't find (+ 1 2 3 4 5) much less readable than 1+2+3+4+5, and since most of my programs don't have math expressions more complicated than that, I'd find the rest of the tradeoffs worth it even without cmu-infix, much as I once decided Python was still worthwhile despite not having i++ or ++i. (In Lisp, by the way, one would use (incf i).)

What are some alternatives?

When comparing LoopVectorization.jl and cmu-infix you can also consider the following projects:

CUDA.jl - CUDA programming in Julia.

cl4py - Common Lisp for Python

julia - The Julia Programming Language

trivia - Pattern Matcher Compatible with Optima

StaticCompiler.jl - Compiles Julia code to a standalone library (experimental)

fructure - a structured interaction engine 🗜️ ⚗️

cl-cuda - Cl-cuda is a library to use NVIDIA CUDA in Common Lisp programs.

LIBUCL - Universal configuration library parser

julia-vim - Vim support for Julia.

janet - A dynamic language and bytecode vm

bel - An interpreter for Bel, Paul Graham's Lisp language

zig - General-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.