einops VS Conference-Acceptance-Rate

Compare einops vs Conference-Acceptance-Rate and see how they differ.

einops

Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) (by arogozhnikov)
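
To give a sense of what einops does, here is a minimal sketch using its rearrange and reduce functions (the array shapes and the NumPy backend are just illustrative choices):

    # Minimal einops sketch: reshaping and pooling written as readable patterns.
    import numpy as np
    from einops import rearrange, reduce

    x = np.random.rand(32, 3, 64, 64)             # (batch, channels, height, width)

    # Flatten the spatial axes into one "token" axis: (b, c, h, w) -> (b, h*w, c)
    tokens = rearrange(x, 'b c h w -> b (h w) c')

    # Global average pooling over the spatial axes: (b, c, h, w) -> (b, c)
    pooled = reduce(x, 'b c h w -> b c', 'mean')

    print(tokens.shape)   # (32, 4096, 3)
    print(pooled.shape)   # (32, 3)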

Conference-Acceptance-Rate

Acceptance rates for the major AI conferences (by lixin4ever)
                   einops          Conference-Acceptance-Rate
Mentions           19              6
Stars              7,916           3,798
Growth             -               -
Activity           7.4             6.0
Latest commit      11 days ago     22 days ago
Language           Python          Jupyter Notebook
License            MIT License     MIT License
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
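
The exact formula behind the activity score is not published here. Purely as an illustration of the idea of recency-weighted activity, the sketch below computes one plausible such score from commit dates; the exponential half-life and the scaling are assumptions, not this site's actual method:

    # Hypothetical sketch: a recency-weighted activity score from commit timestamps.
    # The 90-day half-life is an assumed parameter, not the formula used by this site.
    from datetime import datetime, timedelta, timezone

    def activity_score(commit_dates, half_life_days=90):
        """Sum of per-commit weights that halve every half_life_days of age."""
        now = datetime.now(timezone.utc)
        score = 0.0
        for d in commit_dates:
            age_days = (now - d).total_seconds() / 86400
            score += 0.5 ** (age_days / half_life_days)   # recent commits weigh more
        return score

    # Example: ten commits spread roughly one month apart over the last year
    commits = [datetime.now(timezone.utc) - timedelta(days=30 * i) for i in range(10)]
    print(f"activity ~ {activity_score(commits):.2f}")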

einops

Posts with mentions or reviews of einops. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-27.

Conference-Acceptance-Rate

Posts with mentions or reviews of Conference-Acceptance-Rate. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-29.
  • Argonne National Lab is attempting to replicate LK-99
    2 projects | news.ycombinator.com | 29 Jul 2023
    > my claim was that we need some mechanism to decide who gets a position and who doesn't

    I don't disagree. My claim is that the current mechanisms are nearly indistinguishable from noise, but we pretend that they are very strong. I have also made the claim that Goodhart's Law is at play, and I've laid out the mechanisms and evidence for this. I'm not saying "trust me bro," I'm saying "here's my claim, here's the mechanism which I believe explains the claim, and here's my evidence." I'll admit it's a bit messy because it's an HN conversation, but it's all here.

    What I haven't stated explicitly is the alternative, so I will. The alternative is to do what we already do but just without journals and conferences. Peers still make decisions. You still look at citations, h-indices, and such, even though these are still noisy and problematic.

    I really just have two claims here: 1) journals and conferences are high entropy signals. 2) If a signal has high entropy, it should not be the basis for important decisions.

    If you want to see evidence for #1, then, considering you publish in ML, I highly recommend reading several of the MANY papers and blog posts on the NeurIPS consistency experiments (there are 2!). The tldr is "reviewers are good at identifying bad papers, but not good at identifying good papers." In other words, the false negative rate is high: acceptance is noisy. This is because review is highly subjective and reviewers default to reject. I'd add that they have an incentive to, too.

    If you want evidence for #2, I suggest reading any book on signals, information theory, or statistics. It may go under many names, but the claim is not unique and is generally uncontested. (A toy numerical sketch of this point follows the links at the end of this comment.)

    > The stated reasons are the same for different branches of science, while replication crisis is far from that.

    My claim about journals being at the heart of the replication crisis is that they chase novelty, and that since we have a publish-or-perish paradigm (a coupled phenomenon), we're disincentivized from reproducing work. I'm sure you are fully aware that you get no credit for confirming work.

    > (Almost) nobody reads arxiv

    You claimed __aavaa__ was trolling, but are you sure you aren't? Your reference to Ashen is literally proof that people are reading arxiv. Clearly you're on twitter and you're in ML, so I don't know how you haven't noticed that people frequently post their arxiv papers. My only guess is that you're green, and that when you started, these conferences had a 1-year experiment in which they asked authors not to publicly advertise their preprints. But before this (and starting again), this is how papers were communicated. And guess what, it still happened, just behind closed doors.

    And given that you're in ML, I'm not sure how you're not aware that the innovation cycle is faster than the publication cycle. Waiting 4-8 months is too long. Look at CVPR: submission deadline in early November, final decision at the end of February, conference in late June. That's 4 months just to get Twitter notifications, from authors, about works to read. The listing didn't go live until April, so it's 6 months if you don't have peer networks. Then good luck sorting through it: that's just a list of papers, unsorted and untagged. On top of this, there's still a lot of garbage to sort through.

    > the snr of arxiv is so low... that just by filtering garbage you can get quarter a mil followers

    You and I see this very differently, so allow me to explain. CVPR published 2359 papers for 2023 and 2066 for 2022. Numbers are from here[0], but you can find similar numbers here[1] if you want to look at other conferences. Yes, conferences provide curation, but so do Ashen and Twitter. I follow several accounts like Ashen. I also follow several researchers because I want to follow their work. This isn't happening because conferences are telling me who to listen to; it's because I've spent years wading through the noise and have peers who tell me what to read and what not to read. Additionally, I get recommendations from my advisor and his peers, from other members of my lab, from other people in my department, from friends outside my university, as well as emails from {Google,Semantic} Scholar. It's a well-known problem that one of the most difficult things in a PhD is getting your network established and figuring out how to wade through the noise to know what to read and what not to. It's just a lot of fucking noise.

    The SNR is low EVERYWHERE

    And I'd like to add that I'm not necessarily unique in many of my points. Here's Bengio talking about how the pressures create incrementalism[2]. In this interview Hinton says "if you send in a paper that has a radically new idea, there's no chance in hell it will get accepted"[3]. Here's Peter Higgs saying he wouldn't cut it in modern academia[4]. The history of science is littered with people who got rocketed to fame because they spent a long time doing work and were willing to challenge the status quo. __Science necessitates challenging the status quo.__ Mathematics is quite famous for these dark horses actually (e.g. Yitang Zhang, Ramanujan, Galois, Sophie Germain, Grigori Perelman).

    > Citation needed? You could claim the same for science as whole, but self-correction turns to be extremely robust, even if slower than you may want.

    This entirely depends on the latter part. Yeah, things self-correct, since eventually works get used in practice and are thus held up to the flames. But if you're saying that a work eventually getting retracted, given infinite time, is proof that the system is working, then I don't think this is an argument that can be had in good faith. It is better to talk about fraud, dishonesty, plagiarism, how a system fails to accept good works, how it discourages innovation, and similar concepts. It is not that useful to discuss that over time a system resolves these problems, because within that framework the exact discussion we are having would constitute part of that process, leading us into a self-referential argument that is falsely constructed such that you will always be correct.

    There are plenty of long-term examples of major failures that took decades to correct, like Semmelweis and Boltzmann (both of whom were driven insane and killed themselves). But if you're looking for short-term examples in the modern era, I'd say that being on HN this past week should have been evidence enough. In the last 8 days we've had: our second story on Gzip[5], "Fabricated data in research about honesty"[6] (which includes papers that were accepted for over a decade and widely cited), "A forthcoming collapse of the Atlantic meridional overturning circulation"[7] (where braaannigan had the top comment for a while and many misinterpreted it), and "I thought I wanted to be a professor, then I served on a hiring committee"[8] (where the article and comments discuss metric hacking/Goodhart's Law). Or, with more ML context, we can see "Please Commit More Blatant Academic Fraud"[9], where Jacob discusses explicit academic fraud and how prolific it is, as well as referencing collusion rings. Or what about last year's CVPR E2V-SDE memes[10] about BLATANT plagiarism? The subsequent revelation of other blatant plagiarisms[11]? Or what about the collusion ring we learned about last year[12]? If you need more evidence, go see retractionwatch, or I'll re-reference the NeurIPS consistency experiments (which we should know are not unique to NeurIPS). Also, just talk to senior graduate students and ask them why they are burned out. Especially ask both students who made it and those struggling, who will have very different stories. For even more, I ask that you go to CSRankings.org and create a regression plot of university rank against the number of faculty. If you need even more, go read any work discussing improvements on the TPM or any of those discussing fairness in the review process. You need to be careful and ask whether you're a beneficiary of the system, a success of the system, or a success despite the system, and where you are biased (and where I am, too). There's plenty out there, and it goes into far more detail than I can in an HN comment.

    Is that enough citations?

    [0] https://cvpr2023.thecvf.com/Conferences/2023/AcceptedPapers

    [1] https://github.com/lixin4ever/Conference-Acceptance-Rate

    [2] https://yoshuabengio.org/2020/02/26/time-to-rethink-the-publ...

    [3] https://www.wired.com/story/googles-ai-guru-computers-think-...

    [4] https://www.theguardian.com/science/2013/dec/06/peter-higgs-...

    [5] https://news.ycombinator.com/item?id=36921552

    [6] https://news.ycombinator.com/item?id=36907829

    [7] https://news.ycombinator.com/item?id=36864319

    [8] https://news.ycombinator.com/item?id=36825204

    [9] https://jacobbuckman.com/2021-05-29-please-commit-more-blata...

    [10] https://www.youtube.com/watch?v=UCmkpLduptU

    [11] https://www.reddit.com/r/MachineLearning/comments/vjkssf/com...

    [12] https://twitter.com/chriswolfvision/status/15452796423404011...
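
    As a toy illustration of claims #1 and #2 above (an editorial sketch, not part of the quoted comment): if acceptance is noisy, the accept/reject decision carries little information about paper quality. All rates below are made-up assumptions, not figures from the NeurIPS experiments.

        # Toy sketch: how much information a noisy accept/reject decision carries
        # about "true" paper quality, measured as mutual information in bits.
        # All probabilities are assumed for illustration only.
        from math import log2

        def h(p):
            # Binary entropy in bits; h(0) and h(1) are taken as 0.
            return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

        p_good = 0.3        # assumed fraction of "good" submissions
        p_acc_good = 0.5    # assumed accept rate for good papers (many false negatives)
        p_acc_bad = 0.1     # assumed accept rate for bad papers (rejection works better)

        p_accept = p_good * p_acc_good + (1 - p_good) * p_acc_bad
        # I(quality; decision) = H(decision) - H(decision | quality)
        info = h(p_accept) - (p_good * h(p_acc_good) + (1 - p_good) * h(p_acc_bad))

        print(f"P(accept) = {p_accept:.2f}")
        print(f"decision carries {info:.2f} of the {h(p_good):.2f} bits of uncertainty about quality")

    With these assumed rates, the decision resolves only about 0.13 of the 0.88 bits of uncertainty about quality, which is the sense in which a noisy signal is a weak basis for important decisions.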

  • New PhD in the field of AI, what the hell is going on with AI/CV conferences?
    1 project | /r/computervision | 27 Mar 2023
    I believe it is quite common due to the increasing popularity of the field. Acceptance rates are dropping somewhat, while submission numbers are booming. See https://github.com/lixin4ever/Conference-Acceptance-Rate for the top conferences.
  • The Toxic Culture of Rejection in Computer Science
    2 projects | news.ycombinator.com | 26 Aug 2022
    I saw a list of acceptance rates here: https://github.com/lixin4ever/Conference-Acceptance-Rate.

    Is an acceptance rate of 18% or so really low, though? Almost 2 in 10 submissions are accepted, and I thought "the top" meant something like 2% or less.

    BTW, are there any resources that catalog which ideas in papers may work well in industry? As someone outside of academia, I find there are simply too many papers, even from top conferences, for me to consume. It's hard for me to know which papers' ideas can help me or not, and this 18% acceptance rate is no longer a good enough filter.

  • [D] Any independent researchers ever get published/into conferences?
    3 projects | /r/MachineLearning | 29 Apr 2022
    You should check this list and the extended list of conference acceptance rates. They include most top conferences in ML/AI by field. The second list may be more friendly to new authors.
  • [D] IJCAI 2021 Paper Acceptance Result
    1 project | /r/MachineLearning | 29 Apr 2021
    I wonder if they are consciously trying to make IJCAI "more prestigious" by just rejecting more; it's been going down since 2018. I doubt that many more bad papers are going in. IMO IJCAI is already a high-tier venue; just give us 1 more content page & unlimited references.
  • [D] IJCAI 2021 Paper Reviews
    1 project | /r/MachineLearning | 25 Mar 2021

What are some alternatives?

When comparing einops and Conference-Acceptance-Rate you can also consider the following projects:

extending-jax - Extending JAX with custom C++ and CUDA code

AI-Conference-Info - Extensive acceptance rates and information of main AI conferences

opt_einsum - ⚡️Optimizing einsum functions in NumPy, Tensorflow, Dask, and more with contraction order optimization.

kymatio - Wavelet scattering transforms in Python with GPU acceleration

d2l-en - Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries including Stanford, MIT, Harvard, and Cambridge.

data-science-ipython-notebooks - Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.

horovod - Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.

3d-ken-burns - an implementation of 3D Ken Burns Effect from a Single Image using PyTorch

jaxopt - Hardware accelerated, batchable and differentiable optimizers in JAX.

best-of-ml-python - 🏆 A ranked list of awesome machine learning Python libraries. Updated weekly.

numpyro - Probabilistic programming with NumPy powered by JAX for autograd and JIT compilation to GPU/TPU/CPU.

deepo - Setup and customize deep learning environment in seconds.