alphafold2 VS array

Compare alphafold2 vs array and see what their differences are.

alphafold2

To eventually become an unofficial Pytorch implementation / replication of Alphafold2, as details of the architecture get released (by lucidrains)
                 alphafold2          array
Mentions         1                   5
Stars            1,501               189
Growth           -                   -
Activity         0.0                 6.9
Last commit      over 1 year ago     5 months ago
Language         Python              C++
License          MIT License         Apache License 2.0
Mentions - the total number of mentions of a project that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

alphafold2

Posts with mentions or reviews of alphafold2. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-04-09.

array

Posts with mentions or reviews of array. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-27.
  • Einsum in 40 Lines of Python
    6 projects | news.ycombinator.com | 27 Apr 2024
    I wrote a library in C++ (I know, probably a non-starter for most reading this) that I think does most of what you want, as well as some other requests in this thread (generalized to more than just multiply-add): https://github.com/dsharlet/array?tab=readme-ov-file#einstei....

    A matrix multiply written with this looks like this:

        enum { i = 2, j = 0, k = 1 };
  • Benchmarking 20 programming languages on N-queens and matrix multiplication
    15 projects | news.ycombinator.com | 2 Jan 2024
    I should have mentioned somewhere, I disabled threading for OpenBLAS, so it is comparing one thread to one thread. Parallelism would be easy to add, but I tend to want the thread parallelism outside code like this anyways.

    As for the inner loop not being well optimized... the disassembly looks like the same basic thing as OpenBLAS. There's disassembly in the comments of that file to show what code it generates, I'd love to know what you think is lacking! The only difference between the one I linked and this is prefetching and outer loop ordering: https://github.com/dsharlet/array/blob/master/examples/linea...

  • A basic introduction to NumPy's einsum
    13 projects | news.ycombinator.com | 9 Apr 2022
    If you are looking for something like this in C++, here's my attempt at implementing it: https://github.com/dsharlet/array#einstein-reductions

    It doesn't do any automatic optimization of the loops like some of the projects linked in this thread, but it provides all the tools needed for humans to express the code in a way that a good compiler can turn into really good code.
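
The enum in the first post names the indices of an Einstein-style reduction; the array library's actual API is behind the (truncated) README links quoted above. As a plain-C++ sketch of the reduction those indices describe, C(i, j) = sum over k of A(i, k) * B(k, j), written with a hypothetical Matrix helper rather than the library's own types:

    // Plain C++ illustration of the einsum-style reduction discussed above:
    //   C(i, j) = sum_k A(i, k) * B(k, j)
    // This is not the array library's API; it only spells out the semantics.
    #include <cstddef>
    #include <vector>

    // Minimal dense row-major matrix (hypothetical helper, not from the library).
    struct Matrix {
        std::size_t rows, cols;
        std::vector<float> data;
        Matrix(std::size_t r, std::size_t c) : rows(r), cols(c), data(r * c, 0.0f) {}
        float& operator()(std::size_t i, std::size_t j) { return data[i * cols + j]; }
        float operator()(std::size_t i, std::size_t j) const { return data[i * cols + j]; }
    };

    // The generalized multiply-add the posts refer to, written as ordinary loops.
    Matrix matmul(const Matrix& A, const Matrix& B) {
        Matrix C(A.rows, B.cols);
        for (std::size_t i = 0; i < A.rows; ++i)
            for (std::size_t k = 0; k < A.cols; ++k)      // k is the reduced (summed) index
                for (std::size_t j = 0; j < B.cols; ++j)
                    C(i, j) += A(i, k) * B(k, j);
        return C;
    }

Keeping j innermost makes the accesses to B and C contiguous in memory, which is the kind of layout-aware expression the last post says a good compiler can turn into fast code.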
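
The second post compares against OpenBLAS with threading disabled. A minimal sketch of that kind of single-threaded baseline, assuming OpenBLAS is installed and linked with -lopenblas (this is not the author's benchmark code):

    // Single-threaded cblas_sgemm baseline, roughly what such a benchmark compares against.
    // Disable OpenBLAS threading at run time, e.g.: OPENBLAS_NUM_THREADS=1 ./gemm_baseline
    #include <cblas.h>
    #include <vector>

    int main() {
        const int M = 512, N = 512, K = 512;       // arbitrary problem size
        std::vector<float> A(M * K, 1.0f);         // row-major M x K
        std::vector<float> B(K * N, 1.0f);         // row-major K x N
        std::vector<float> C(M * N, 0.0f);         // row-major M x N

        // C = 1.0 * A * B + 0.0 * C
        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    M, N, K,
                    1.0f, A.data(), K,
                    B.data(), N,
                    0.0f, C.data(), N);
        return 0;
    }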

What are some alternatives?

When comparing alphafold2 and array you can also consider the following projects:

einops - Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)

optimizing-the-memory-layout-of-std-tuple - Optimizing the memory layout of std::tuple