Conference-Acceptance-Rate VS papers

Compare Conference-Acceptance-Rate vs papers and see what their differences are.

Conference-Acceptance-Rate

Acceptance rates for the major AI conferences (by lixin4ever)

papers

ISO/IEC JTC1 SC22 WG21 paper scheduling and management (by cplusplus)
                  Conference-Acceptance-Rate   papers
    Mentions      6                            85
    Stars         3,870                        596
    Growth        -                            2.5%
    Activity      6.0                          4.2
    Last commit   7 days ago                   19 days ago
    Language      Jupyter Notebook             Perl
    License       MIT License                  -
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.

Conference-Acceptance-Rate

Posts with mentions or reviews of Conference-Acceptance-Rate. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-29.
  • Argonne National Lab is attempting to replicate LK-99
    2 projects | news.ycombinator.com | 29 Jul 2023
    > my claim was that we need some mechanism to decide who gets a position and who doesn't

    I don't disagree. My claim is that the current mechanisms are nearly indistinguishable from noise, but we pretend that they are very strong. I have also claimed that Goodhart's Law is at play, and I have laid out the mechanisms and evidence for this. I'm not saying "trust me bro," I'm saying "here's my claim, here's the mechanism which I believe explains the claim, and here's my evidence." I'll admit it's a bit messy because it's an HN conversation, but it's all here.

    What I haven't stated explicitly is the alternative, so I will. The alternative is to do what we already do but just without journals and conferences. Peers still make decisions. You still look at citations, h-indices, and such, even though these are still noisy and problematic.

    I really just have two claims here: 1) journals and conferences are high-entropy signals; 2) a signal with high entropy should not be the basis for important decisions.

    If you want to see evidence for #1, then considering you publish in ML, I highly recommend reading several of the MANY papers and blog posts on the NeurIPS consistency experiments (there are two!). The tl;dr is "reviewers are good at identifying bad papers, but not good at identifying good papers." In other words: the false negative rate is high and acceptance is noisy. This is because review is highly subjective and reviewers default to reject. I'd add that they have an incentive to, too.

    If you want evidence for #2, I suggest reading any book on signals, information theory, or statistics. It may go under many names, but the claim is not unique and is generally uncontested.
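
    To make #2 concrete, here is a minimal sketch of how little information a noisy accept decision carries about paper quality, measured as mutual information. The numbers in the joint distribution are ones I'm assuming for illustration, not the actual NeurIPS consistency-experiment figures:

        #include <cmath>
        #include <cstdio>

        // Mutual information I(Q;A) between true paper quality Q and the
        // accept decision A, from a 2x2 joint distribution. The numbers are
        // purely illustrative (assumed), not NeurIPS experiment data.
        int main() {
            // joint[q][a]: q = 0 bad / 1 good; a = 0 reject / 1 accept.
            // Reviewers reject bad papers reliably but accept good ones
            // close to randomly, per the "default to reject" claim above.
            double joint[2][2] = {{0.70, 0.05},   // bad papers
                                  {0.13, 0.12}};  // good papers
            double pq[2] = {joint[0][0] + joint[0][1], joint[1][0] + joint[1][1]};
            double pa[2] = {joint[0][0] + joint[1][0], joint[0][1] + joint[1][1]};

            double mi = 0.0;
            for (int q = 0; q < 2; ++q)
                for (int a = 0; a < 2; ++a)
                    if (joint[q][a] > 0.0)
                        mi += joint[q][a] * std::log2(joint[q][a] / (pq[q] * pa[a]));

            std::printf("I(Q;A) = %.3f bits of at most %.3f\n", mi,
                        -(pq[0] * std::log2(pq[0]) + pq[1] * std::log2(pq[1])));
        }

    With numbers like these, the decision carries roughly 0.14 of the 0.81 available bits about quality, which is the precise sense in which a high-entropy (noisy) signal is a poor basis for decisions.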

    > The stated reasons are the same for different branches of science, while replication crisis is far from that.

    My claim about journals being at the heart of the replication crisis is that they chase novelty. Since we have a publish-or-perish paradigm (a coupled phenomenon), we're disincentivized from reproducing work. I'm sure you are fully aware that you get no credit for confirming prior work.

    > (Almost) nobody reads arxiv

    You claimed __aavaa__ was trolling, but are you sure you aren't? Your reference to Ashen is literally proof that people read arxiv. Clearly you're on twitter and you're in ML, so I don't know how you don't see that people frequently post their arxiv papers. My only guess is that you're green, and when you started, these conferences were running a one-year experiment in which they asked authors not to publicly advertise their preprints. But before this (and starting again), this is how papers were communicated. And guess what, it still happened, just behind closed doors.

    And given that you're in ML, I'm not sure how you're unaware that the innovation cycle is faster than the publication cycle. Waiting 4-8 months is too long. Look at CVPR: submission deadline in early November, final decisions at the end of February, conference in late June. That's 4 months before you even get Twitter notifications, from authors, about works to read. The listing didn't go live until April, so it's 6 months if you don't have peer networks. And then good luck sorting through it: that's just a list of papers, unsorted and untagged. On top of this, there's still a lot of garbage to sift through.

    > the snr of arxiv is so low... that just by filtering garbage you can get quarter a mil followers

    You and I see this very differently, so allow me to explain. CVPR published 2359 papers in 2023 and 2066 in 2022. Numbers are from here[0], but you can find similar numbers here[1] if you want to look at other conferences. Yes, conferences provide curation, but so do Ashen and Twitter. I follow several accounts like Ashen. I also follow several researchers because I want to follow their work. This isn't happening because conferences are telling me who to listen to; it's because I've spent years wading through the noise and have peers who tell me what to read and what not to. Additionally, I get recommendations from my advisor and his peers, from other members of my lab, from other people in my department, from friends outside my university, as well as emails from {Google,Semantic} Scholar. It's a well-known problem that one of the most difficult things in a PhD is establishing your network and figuring out how to wade through the noise to know what to read and what not to. It's just a lot of fucking noise.

    The SNR is low EVERYWHERE.

    And I'd like to add that I'm not necessarily unique in many of my points. Here's Bengio talking about how the pressures create incrementalism[2]. In this interview Hinton says "if you send in a paper that has a radically new idea, there's no chance in hell it will get accepted"[3]. Here's Peter Higgs saying he wouldn't cut it in modern academia[4]. The history of science is littered with people who got rocketed to fame because they spent a long time doing work and were willing to challenge the status quo. __Science necessitates challenging the status quo.__ Mathematics is quite famous for these dark horses actually (e.g. Yitang Zhang, Ramanujan, Galois, Sophie Germain, Grigori Perelman).

    > Citation needed? You could claim the same for science as whole, but self-correction turns to be extremely robust, even if slower than you may want.

    This entirely depends on the latter part. Yes, things self-correct, since eventually works get used in practice and are thus held up to the flames. But if you're saying that a retraction arriving given infinite time is proof that the system works, then I don't think this is an argument that can be had in good faith. It is better to talk about fraud, dishonesty, plagiarism, how a system fails to accept good works, how it discourages innovation, and similar concepts. It is not useful to argue that the system resolves these problems over time, because within that framework the exact discussion we are having would constitute part of that process, leading us into a self-referential argument constructed so that you will always be correct.

    There are plenty of long-term examples of major failures that took decades to correct, like Semmelweis and Boltzmann (both of whom were driven insane and killed themselves). But if you're looking for short-term examples in the modern era, being on HN this past week should be evidence enough. In the last 8 days we've had: our second story on Gzip[5]; "Fabricated data in research about honesty"[6] (which includes papers that were accepted for over a decade and widely cited); "A forthcoming collapse of the Atlantic meridional overturning circulation"[7] (where braaannigan had the top comment for a while and many misinterpreted it); and "I thought I wanted to be a professor, then I served on a hiring committee"[8] (where the article and comments discuss metric hacking/Goodhart's Law).

    With more ML context, there's "Please Commit More Blatant Academic Fraud"[9], where Jacob discusses explicit academic fraud, how widespread it is, and collusion rings. Or what about last year's CVPR E2V-SDE memes[10] about BLATANT plagiarism? The subsequent revelation of other blatant plagiarism[11]? Or the collusion ring we learned about last year[12]? If you need more evidence, go see retractionwatch, or I'll re-reference the NeurIPS consistency experiments (which we should know are not unique to NeurIPS).

    Also, just talk to senior graduate students and ask them why they are burned out. Ask both students who made it and those still struggling; they will have very different stories. For even more, go to CSRankings.org and create a regression plot of university rank against the number of faculty. If you need still more, read any work discussing improvements to the TPM or any discussing fairness in the review process. You need to be careful and ask whether you're a beneficiary of the system, a success of the system, or a success despite the system, and where you are biased (as am I). There's plenty out there, and it goes into far more detail than I can in an HN comment.
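
    If you want to run that CSRankings regression yourself, here's a minimal sketch; the (faculty, rank) pairs below are placeholders I'm assuming for illustration, not real CSRankings data, which you'd have to export from the site:

        #include <cstdio>
        #include <utility>
        #include <vector>

        // Ordinary least squares fit of rank ~ b0 + b1 * faculty_count.
        // The data points are placeholders, NOT real CSRankings numbers.
        int main() {
            std::vector<std::pair<double, double>> pts = {
                {150, 1}, {120, 5}, {90, 12}, {60, 25}, {40, 48}, {25, 80}};
            double sx = 0, sy = 0, sxx = 0, sxy = 0, n = pts.size();
            for (auto [x, y] : pts) { sx += x; sy += y; sxx += x * x; sxy += x * y; }
            double b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx);
            double b0 = (sy - b1 * sx) / n;
            std::printf("rank ~= %.2f + %.3f * faculty\n", b0, b1);
            // A strongly negative slope would mean rank largely tracks
            // department size, which is the point of the exercise.
        }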

    Is that enough citations?

    [0] https://cvpr2023.thecvf.com/Conferences/2023/AcceptedPapers

    [1] https://github.com/lixin4ever/Conference-Acceptance-Rate

    [2] https://yoshuabengio.org/2020/02/26/time-to-rethink-the-publ...

    [3] https://www.wired.com/story/googles-ai-guru-computers-think-...

    [4] https://www.theguardian.com/science/2013/dec/06/peter-higgs-...

    [5] https://news.ycombinator.com/item?id=36921552

    [6] https://news.ycombinator.com/item?id=36907829

    [7] https://news.ycombinator.com/item?id=36864319

    [8] https://news.ycombinator.com/item?id=36825204

    [9] https://jacobbuckman.com/2021-05-29-please-commit-more-blata...

    [10] https://www.youtube.com/watch?v=UCmkpLduptU

    [11] https://www.reddit.com/r/MachineLearning/comments/vjkssf/com...

    [12] https://twitter.com/chriswolfvision/status/15452796423404011...

  • New PhD in the field of AI, what the hell is going on with AI/CV conferences?
    1 project | /r/computervision | 27 Mar 2023
    I believe it is quite common due to the increasing popularity of the field. Acceptance rates are dropping somewhat, while submission numbers are booming. See https://github.com/lixin4ever/Conference-Acceptance-Rate for the top conferences
  • The Toxic Culture of Rejection in Computer Science
    2 projects | news.ycombinator.com | 26 Aug 2022
    I saw a list of acceptance rates here: https://github.com/lixin4ever/Conference-Acceptance-Rate.

    Is an 18% or so acceptance rate really low, though? Almost 2 in 10 submissions are accepted, and I thought "the top" meant something like 2% or less.

    BTW, are there any resources that catalog which ideas in papers may work well in industry? As someone outside of academia, I find there are simply too many papers, even from top conferences, for me to consume. It's hard for me to know which papers' ideas can help me, and this 18% acceptance rate is no longer a good enough filter.

  • [D] Any independent researchers ever get published/into conferences?
    3 projects | /r/MachineLearning | 29 Apr 2022
    You should check this list and the extended list of conference acceptance rates. They include most top conferences in ML/AI by field. The second list may be friendlier to new authors.
  • [D] IJCAI 2021 Paper Acceptance Result
    1 project | /r/MachineLearning | 29 Apr 2021
    I wonder if they are consciously trying to make IJCAI "more prestigious" by just rejecting more; it's been going down since 2018. I doubt that many more bad papers are being submitted. IMO IJCAI is already a high-tier venue; just give us 1 more content page & unlimited references.
  • [D] IJCAI 2021 Paper Reviews
    1 project | /r/MachineLearning | 25 Mar 2021

papers

Posts with mentions or reviews of papers. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-07.
  • Qt and C++ Trivial Relocation (Part 1)
    3 projects | news.ycombinator.com | 7 May 2024
    It is slowly making its way through the standards committee. https://github.com/cplusplus/papers/issues/43

    The author has a fork of clang and gcc with some pretty impressive speedups, so I’m hopeful! https://lists.isocpp.org/sg14/2024/04/1127.php
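
    For readers unfamiliar with the term: trivial relocation collapses move-construct-plus-destroy into a single byte copy. Below is a rough sketch of the idea only, using std::is_trivially_copyable as a conservative stand-in; the actual proposals tracked in that issue define a separate trivially-relocatable trait, and their exact names and semantics differ:

        #include <cstring>
        #include <new>
        #include <type_traits>
        #include <utility>

        // Relocate n objects from src into raw storage at dst: either
        // move-construct and destroy one by one, or do a single memcpy
        // when that is provably equivalent. Gating on is_trivially_copyable
        // here is a stand-in for the proposed trivially-relocatable trait.
        template <class T>
        void relocate_n(T* src, std::size_t n, T* dst) {
            if constexpr (std::is_trivially_copyable_v<T>) {
                std::memcpy(dst, src, n * sizeof(T));  // one pass, no per-object work
            } else {
                for (std::size_t i = 0; i < n; ++i) {
                    ::new (static_cast<void*>(dst + i)) T(std::move(src[i]));
                    src[i].~T();
                }
            }
        }

    This is the optimization vector growth wants: types like std::unique_ptr are semantically relocatable but not trivially copyable, which is exactly why a dedicated trait is being standardized.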

  • Learn Modern C++
    6 projects | news.ycombinator.com | 26 Dec 2023
    What's fun is, because everything is decided in papers, we can find out why! https://github.com/cplusplus/papers/issues/884

    Accepted paper here: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2022/p20...

    > The proposed std::print function improves usability, avoids allocating a temporary std::string object and calling operator<< which performs formatted I/O on text that is already formatted. The number of function calls is reduced to one which, together with std::vformat-like type erasure, results in much smaller binary code (see § 13 Binary code).

    Additionally,

    > Another problem is formatting of Unicode text:

    > std::cout << "Привет, κόσμος!";

    > If the source and execution encoding is UTF-8 this will produce the expected output on most GNU/Linux and macOS systems. Unfortunately on Windows it is almost guaranteed to produce mojibake despite the fact that the system is fully capable of printing Unicode
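
    C++23 ultimately shipped this as std::print/std::println in <print>. A minimal example of the replacement the paper argues for (requires a standard library that already implements <print>):

        #include <print>

        int main() {
            std::println("Привет, κόσμος!");           // Unicode-aware, one call
            std::println("{} + {} = {}", 1, 2, 1 + 2); // std::format-style formatting
        }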

  • The insanity of compile time programming
    2 projects | /r/cpp_questions | 10 Dec 2023
  • P1673 A free function linear algebra interface based on the BLAS
    1 project | news.ycombinator.com | 23 Nov 2023
  • When will std::linalg make it into a new C++ release?
    1 project | /r/cpp | 14 Sep 2023
    See https://github.com/cplusplus/papers/issues/557
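
    For a flavor of what P1673 proposes, here is a rough sketch of a BLAS gemv (y = A*x) written against the C++26 std::linalg free-function interface on top of std::mdspan; treat the details as approximate and check the issue above for current status, since toolchain support is still landing:

        #include <array>
        #include <linalg>   // C++26 <linalg>; availability varies by toolchain
        #include <mdspan>

        int main() {
            std::array<double, 4> a_data{1, 2, 3, 4};  // 2x2 matrix, row-major
            std::array<double, 2> x_data{1, 1}, y_data{};

            std::mdspan A(a_data.data(), 2, 2);
            std::mdspan x(x_data.data(), 2), y(y_data.data(), 2);

            // y = A * x: BLAS gemv as a free function over views,
            // instead of a member function or a Fortran-style call.
            std::linalg::matrix_vector_product(A, x, y);
        }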
  • C++ Papercuts
    3 projects | news.ycombinator.com | 28 Aug 2023
    Bringing editions to C++ failed, and I am not aware of anyone trying to tackle the issues: https://github.com/cplusplus/papers/issues/631

    (I could be wrong though! I follow the committee more than you may guess, but not as much as to think I know everything about what's going on.)

  • Argonne National Lab is attempting to replicate LK-99
    2 projects | news.ycombinator.com | 29 Jul 2023
    GitHub would not be relevant in this respect because:

    * It's owned by a (single) commercial corporation, Microsoft.

    * There is censorship both by content and in some respects by country of origin.

    * The code is closed.

    but otherwise it's an interesting idea.

    The C++ standardization committee uses GitHub to track papers submitted to them, see:

    https://github.com/cplusplus/papers

  • C++23: The Next C++ Standard
    1 project | /r/cpp | 11 Jul 2023
    There was no non-approval. The facility needs more work, and the authors (and the committee) were focused on getting print/format done first. I hope that the paper will be worked on again in the future. We will be happy to review it once there is a revision (see GitHub for the history)
  • What C++ library do you wish existed but hasn’t been created yet?
    18 projects | /r/cpp | 8 Jul 2023
  • 2023-06 Varna ISO C++ Committee Trip Report — First Official C++26 meeting!
    2 projects | /r/cpp | 23 Jun 2023
    For more details on what we did at the 2023-06 Varna meeting, the [GitHub issue](https://github.com/cplusplus/papers/issues/328) associated with the paper has a summary.

What are some alternatives?

When comparing Conference-Acceptance-Rate and papers you can also consider the following projects:

einops - Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)

circle - The compiler is available for download. Get it!

AI-Conference-Info - Extensive acceptance rates and information of main AI conferences

compiler-explorer - Run compilers interactively from your web browser and interact with the assembly

C++ Format - A modern formatting library

LEWG - Project planning for the C++ Library Evolution Working Group

CPM.cmake - 📦 CMake's missing package manager. A small CMake script for setup-free, cross-platform, reproducible dependency management.

tinyformat - Minimal, type safe printf replacement library for C++

FastAD - FastAD is a C++ implementation of automatic differentiation both forward and reverse mode.

rangesnext - ranges features for C++23 ported to C++20

mp11 - C++11 metaprogramming library

foundation.rust-lang.org - website for Rust Foundation