raphlinus VS Conference-Acceptance-Rate

Compare raphlinus vs Conference-Acceptance-Rate and see how they differ.

Conference-Acceptance-Rate

Acceptance rates for the major AI conferences (by lixin4ever)
             raphlinus   Conference-Acceptance-Rate
Mentions     10          6
Stars        -           3,798
Growth       -           -
Activity     -           6.0
Last commit  -           25 days ago
Language     -           Jupyter Notebook
License      -           MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

raphlinus

Posts with mentions or reviews of raphlinus. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-01.
  • Moving from Rust to C++
    4 projects | /r/rust | 1 Apr 2023
    That's surprising, given the legendary sense of humor of the C++ community.
  • Announcing Bezier-rs: computational geometry algorithms for Bézier paths (seeking code review and boolean ops help)
    3 projects | /r/rust | 31 Mar 2023
    I have some ideas on how to do boolean ops, some of which are written up in a blog post issue, and for which I have some code locally. In particular, the parabola estimate seems much more efficient than the usual fat-line approach. I also have a sketch of quadratic/quadratic intersection in kurbo#230.
  • The Beauty of Bézier Curves
    1 project | news.ycombinator.com | 3 Jan 2023
    I am. It is relevant to the topic, and I cite it in the (now two-year-old) outline for the blog post I intend to write on the topic[1].

    [1]: https://github.com/raphlinus/raphlinus.github.io/issues/40

  • Raph’s reflections and wishes for 2023
    3 projects | /r/rust | 31 Dec 2022
    I rewrote that paragraph, as I realize it comes across too contentious. Thanks all for the feedback!
  • Rust GUI library for video playback?
    4 projects | /r/rust | 18 Nov 2022
    To do video properly requires integration with the platform compositor. This is a fiendishly difficult problem, and I know of no serious effort to solve it. I have an outline of a blog post on the topic, and hope to publish it before too long.
  • Parallel curves of cubic Béziers
    3 projects | /r/programming | 10 Sep 2022
    I am working on this problem too, and have an issue on my blog with an outline of the writeup. Stay tuned!
  • The Toxic Culture of Rejection in Computer Science
    2 projects | news.ycombinator.com | 26 Aug 2022
    > PS, is your monoid work online, is it this https://arxiv.org/abs/2205.11659 ? I'm interested in monoids on GPUs.

    Yes, the draft is on arXiv, and I have a blog post in the pipeline[1] explaining it to a more general audience.

    And thanks for the other advice, I'll consider it!

    [1] https://github.com/raphlinus/raphlinus.github.io/issues/66

  • Druid app for public transport data
    9 projects | /r/rust | 26 Aug 2022
    The new xilem async architecture is designed to integrate much more finely with async. I have an outline of a blog post in the queue but am juggling a lot of things right now. Expect to see some updates, but not super soon.
  • Bevy and Dioxus are collaborating on stretch2: a revived UI layout algorithm
    10 projects | /r/rust | 10 May 2022
    This is a hard problem and one of the hardest parts is figuring out scope. I have an upcoming blog post which will touch on some of the issues. In any case I'd love to see some progress on larger ecosystem collaboration.
  • Removing characters from strings faster with AVX-512
    2 projects | news.ycombinator.com | 1 May 2022
    The short answer is no, but the long answer is that this is a very complex tradeoff space. Going forward, we may see more of these types of tasks moving to GPU, but for the moment it is generally not a good choice.

    The GPU is incredible at raw throughput, and this particular problem can actually be implemented fairly straightforwardly (it's a stream compaction, which in turn can be expressed in terms of prefix sum). However, where the GPU absolutely falls down is when you want to interleave CPU and GPU computations. To give round numbers, the roundtrip latency is on the order of 100µs, and even aside from that, the memcpy back and forth between host and device memory might actually be slower than just solving the problem on the CPU. So you only win when the strings are very large, again using round numbers, about a megabyte.
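
    To make the stream-compaction point concrete, here is a minimal sequential sketch of the prefix-sum formulation; this is illustrative Rust under invented names (`compact` is hypothetical), not the article's AVX-512 code nor an actual GPU kernel, where step 2 would run as a parallel scan.

    ```rust
    // Stream compaction via prefix sum, sketched sequentially.
    // A GPU version would run step 2 as a parallel scan.
    fn compact(input: &[u8], keep: impl Fn(u8) -> bool) -> Vec<u8> {
        // 1. Flag each element: 1 = keep, 0 = drop.
        let flags: Vec<usize> = input.iter().map(|&b| keep(b) as usize).collect();

        // 2. Exclusive prefix sum of the flags: each kept element's running
        //    total is its output index; the grand total is the output length.
        let mut indices = vec![0usize; input.len()];
        let mut total = 0;
        for (i, &f) in flags.iter().enumerate() {
            indices[i] = total;
            total += f;
        }

        // 3. Scatter the kept elements to their computed positions.
        let mut out = vec![0u8; total];
        for (i, &b) in input.iter().enumerate() {
            if flags[i] == 1 {
                out[indices[i]] = b;
            }
        }
        out
    }

    fn main() {
        // Remove spaces from a byte string, the article's kind of task.
        assert_eq!(compact(b"a b c", |b| b != b' '), b"abc");
    }
    ```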

    Things change if you are able to pipeline a lot of useful computation on the GPU. This is an area of active research (including my own). Aaron Hsu has been doing groundbreaking work implementing an entire compiler on the GPU, and there's more recent work[1], implemented in Futhark, that suggests that this approach is promising.

    I have a paper in the pipeline that includes an extraordinarily high-performance (~12G elements/s) GPU implementation of the parentheses matching problem, which is the heart of parsing (a rough sketch of that reduction follows the footnotes below). If anyone would like to review a draft and provide comments, please add a comment to the GitHub issue[2] I'm using to track this. It's due very soon and I'm on a tight timeline to get all the measurements done, so actionable suggestions on how to improve the text would be most welcome.

    [1]: https://theses.liacs.nl/pdf/2020-2021-VoetterRobin.pdf

    [2]: https://github.com/raphlinus/raphlinus.github.io/issues/66#i...
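
    As a rough illustration of the parenthesis-matching reduction mentioned above (illustrative Rust under assumed names, not the paper's GPU algorithm): nesting depth is a prefix sum of +1/-1 per character, and a ')' matches the nearest preceding '(' whose depth is one less. The sequential sketch below shows that invariant; a GPU version would replace both loops with parallel scans.

    ```rust
    // Parenthesis matching, sketched sequentially. The depth array is the
    // prefix sum a GPU version computes with a parallel scan; the stack
    // loop stands in for the scan-based matching step.
    fn match_parens(input: &[u8]) -> Vec<Option<usize>> {
        // Exclusive prefix sum of +1 for '(' and -1 for ')': the nesting
        // depth *before* each character.
        let mut depth = Vec::with_capacity(input.len());
        let mut d: i64 = 0;
        for &b in input {
            depth.push(d);
            d += match b { b'(' => 1, b')' => -1, _ => 0 };
        }

        let mut open_stack: Vec<usize> = Vec::new();
        let mut matches = vec![None; input.len()];
        for (i, &b) in input.iter().enumerate() {
            match b {
                b'(' => open_stack.push(i),
                b')' => {
                    if let Some(j) = open_stack.pop() {
                        // The invariant the scan formulation exploits:
                        debug_assert_eq!(depth[j], depth[i] - 1);
                        matches[i] = Some(j);
                        matches[j] = Some(i);
                    }
                }
                _ => {}
            }
        }
        matches
    }

    fn main() {
        let m = match_parens(b"(())()");
        assert_eq!(m[0], Some(3));
        assert_eq!(m[1], Some(2));
        assert_eq!(m[4], Some(5));
    }
    ```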

Conference-Acceptance-Rate

Posts with mentions or reviews of Conference-Acceptance-Rate. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-29.
  • Argonne National Lab is attempting to replicate LK-99
    2 projects | news.ycombinator.com | 29 Jul 2023
    > my claim was that we need some mechanism to decide who gets a position and who doesn't

    I don't disagree. My claim is that the current mechanisms are nearly indistinguishable from noise, but we pretend that they are very strong. I have also claimed that Goodhart's Law is at play, and I've laid out the mechanisms and evidence for this. I'm not saying "trust me bro"; I'm saying "here's my claim, here's the mechanism which I believe explains the claim, and here's my evidence." I'll admit it's a bit messy because it's an HN conversation, but it's all here.

    What I haven't stated explicitly is the alternative, so I will. The alternative is to do what we already do but just without journals and conferences. Peers still make decisions. You still look at citations, h-indices, and such, even though these are still noisy and problematic.

    I really just have two claims here: 1) journals and conferences are high-entropy signals. 2) If a signal has high entropy, it should not be the basis for important decisions.

    If you want to see evidence for #1, then considering you publish in ML, I highly recommend reading several of the MANY papers and blog posts on the NeurIPS consistency experiments (there are 2!). The tldr is "reviewers are good at identifying bad papers, but not good at identifying good papers." In other words: a high false-negative rate, i.e., acceptance is noisy. This is because it is highly subjective and reviewers default to reject. I'd add that they have an incentive to, too.

    If you want evidence for #2, I suggest reading any book on signals, information theory, or statistics. It may go under many names, but the claim is not unique and generally uncontested.
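
    To put a toy number on the two claims above (all figures assumed for illustration, not taken from the NeurIPS experiments): model review as a binary symmetric channel with a uniform prior over paper quality Q, where the accept/reject decision A flips with probability ε. The information each decision carries about quality is then

    ```latex
    % Toy model, assumed numbers: peer review as a binary symmetric channel.
    \[
      I(Q;A) = 1 - H(\epsilon), \qquad
      H(\epsilon) = -\epsilon \log_2 \epsilon - (1-\epsilon)\log_2(1-\epsilon)
    \]
    \[
      \epsilon = 0.3 \;\Rightarrow\; I(Q;A) = 1 - 0.881 \approx 0.12 \text{ bits per decision}
    \]
    ```

    Even a modest assumed 30% per-decision error rate leaves barely a tenth of a bit of information about quality, which is the sense in which a noisy signal is a poor basis for important decisions.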

    > The stated reasons are the same for different branches of science, while replication crisis is far from that.

    My claim about journals being at the heart of the replication crisis is that they chase novelty, and that since we have a publish-or-perish paradigm (a coupled phenomenon), we're disincentivized from reproducing work. I'm sure you are fully aware that you get no credit for confirming work.

    > (Almost) nobody reads arxiv

    You claimed __aavaa__ was trolling, but are you sure you aren't? Your reference to Ashen is literally proof that people are reading arxiv. Clearly you're on Twitter and you're in ML, so I don't know how you can miss that people are frequently posting their arxiv papers. My only guess is that you're green, and that when you started, these conferences were running a one-year experiment in which they asked authors not to publicly advertise their preprints. But before this (and starting again), this is how papers were communicated. And guess what, it still happened, just behind closed doors.

    And given that you're in ML, I'm not sure how you're not aware that the innovation cycle is faster than the publication cycle. Waiting 4-8 months is too long. Look at CVPR: submission deadline in early November, final decision at the end of February, conference in late June. That's 4 months just to get Twitter notifications, from authors, about works to read. The listing didn't go live till April, so it's closer to 6 months if you don't have peer networks. Then good luck sorting through it: that's just a list of papers, unsorted and untagged. On top of this, there's still a lot of garbage to sort through.

    > the snr of arxiv is so low... that just by filtering garbage you can get quarter a mil followers

    You and I see this very differently; allow me to explain. CVPR published 2359 papers for 2023 and 2066 for 2022. Numbers from here[0], but you can find similar numbers here[1] if you want to look at other conferences. Yes, conferences provide curation, but so do Ashen and Twitter. I follow several accounts like Ashen. I also follow several researchers because I want to follow their works. This isn't happening because conferences are telling me who to listen to; it's because I've spent years wading through the noise and have peers who communicate with me what to read and what not to. Additionally, I get recommendations from my advisor and his peers, from other members of my lab, from other people in my department, from friends outside my university, as well as emails from {Google,Semantic} Scholar. It is a well-known problem that one of the most difficult things in a PhD is getting your network established and figuring out how to wade through the noise to know what to read and what not to. It's just a lot of fucking noise.

    The SNR is low EVERYWHERE

    And I'd like to add that I'm not necessarily unique in many of my points. Here's Bengio talking about how the pressures create incrementalism[2]. In this interview Hinton says "if you send in a paper that has a radically new idea, there's no chance in hell it will get accepted"[3]. Here's Peter Higgs saying he wouldn't cut it in modern academia[4]. The history of science is littered with people who got rocketed to fame because they spent a long time doing work and were willing to challenge the status quo. __Science necessitates challenging the status quo.__ Mathematics is quite famous for these dark horses actually (e.g. Yitang Zhang, Ramanujan, Galois, Sophie Germain, Grigori Perelman).

    > Citation needed? You could claim the same for science as whole, but self-correction turns to be extremely robust, even if slower than you may want.

    This entirely depends on the latter part. Yeah, things self-correct, since eventually works get used in practice and are thus held up to the flames. But if you're saying that a work eventually getting retracted, given infinite time, is proof that the system is working, then I don't think this is an argument that can be had in good faith. It is better to talk about fraud, dishonesty, plagiarism, how a system fails to accept good works, how it discourages innovation, and similar concepts. It is not that useful to discuss how a system resolves these problems over time, because within that framework the exact discussion we are having would itself constitute part of that process, leading us into a self-referential argument constructed such that you will always be correct.

    There are plenty of long-term examples of major failures that took decades to correct, like Semmelweis and Boltzmann (both of whom were driven insane and killed themselves). But if you're looking for short-term examples in the modern era, I'd say that being on HN this past week should have been evidence enough. In the last 8 days we've had: our second story on Gzip[5]; "Fabricated data in research about honesty"[6] (which includes papers that were accepted for over a decade and widely cited); "A forthcoming collapse of the Atlantic meridional overturning circulation"[7] (where braaannigan had the top comment for a while and many misinterpreted it); and "I thought I wanted to be a professor, then I served on a hiring committee"[8] (where the article and comments discuss metric hacking/Goodhart's Law). With more ML context, we can look at "Please Commit More Blatant Academic Fraud"[9], where Jacob discusses explicit academic fraud and how widespread it is, as well as referencing collusion rings. Or what about last year's CVPR E2V-SDE memes[10] about BLATANT plagiarism? Or the subsequent revelation of other blatant plagiarisms[11]? Or the collusion ring we learned about last year[12]? If you need more evidence, go see retractionwatch, or I'll re-reference the NeurIPS consistency experiments (which we should assume are not unique to NeurIPS). Also, just talk to senior graduate students and ask them why they are burned out; ask both students who make it and those struggling, who will have very different stories. For even more, go to CSRankings.org and create a regression plot of university rank against number of faculty. If you still need more, read any of the work discussing improvements to the TPM or fairness in the review process. You need to be careful and ask whether you're a beneficiary of the system, a success because of the system, or a success despite the system, and where you are biased (as am I). There's plenty out there, and it goes into far more detail than I can in an HN comment.

    Is that enough citations?

    [0] https://cvpr2023.thecvf.com/Conferences/2023/AcceptedPapers

    [1] https://github.com/lixin4ever/Conference-Acceptance-Rate

    [2] https://yoshuabengio.org/2020/02/26/time-to-rethink-the-publ...

    [3] https://www.wired.com/story/googles-ai-guru-computers-think-...

    [4] https://www.theguardian.com/science/2013/dec/06/peter-higgs-...

    [5] https://news.ycombinator.com/item?id=36921552

    [6] https://news.ycombinator.com/item?id=36907829

    [7] https://news.ycombinator.com/item?id=36864319

    [8] https://news.ycombinator.com/item?id=36825204

    [9] https://jacobbuckman.com/2021-05-29-please-commit-more-blata...

    [10] https://www.youtube.com/watch?v=UCmkpLduptU

    [11] https://www.reddit.com/r/MachineLearning/comments/vjkssf/com...

    [12] https://twitter.com/chriswolfvision/status/15452796423404011...

  • New PhD in the field of AI, what the hell is going on with AI/CV conferences?
    1 project | /r/computervision | 27 Mar 2023
    I believe it is quite common due to the increasing popularity of the field. Acceptance rates are declining somewhat while submission numbers are booming. See https://github.com/lixin4ever/Conference-Acceptance-Rate for top conferences.
  • The Toxic Culture of Rejection in Computer Science
    2 projects | news.ycombinator.com | 26 Aug 2022
    I saw a list of acceptance rates here: https://github.com/lixin4ever/Conference-Acceptance-Rate.

    Is an acceptance rate of 18% or so really low, though? Almost 2 in 10 submissions are accepted, and I thought "the top" meant something like 2% or less.

    BTW, are there any resources that catalog which ideas in papers may work well in industry? As someone outside of academia, I find there are simply too many papers, even from top conferences, for me to consume. It's hard for me to know which papers' ideas can help me, and an 18% acceptance rate is not a good enough filter any more.

  • [D] Any independent researchers ever get published/into conferences?
    3 projects | /r/MachineLearning | 29 Apr 2022
    You should check this list and the extended list of conference acceptance rates. They include most top conferences in ML/AI by fields. The second list may be more friendly to new authors.
  • [D] IJCAI 2021 Paper Acceptance Result
    1 project | /r/MachineLearning | 29 Apr 2021
    I wonder if they are consciously trying to make IJCAI "more prestigious" by just rejecting more; the acceptance rate has been going down since 2018. I doubt that many more bad papers are being submitted. IMO IJCAI is already a high-tier venue; just give us one more content page and unlimited references.
  • [D] IJCAI 2021 Paper Reviews
    1 project | /r/MachineLearning | 25 Mar 2021

What are some alternatives?

When comparing raphlinus and Conference-Acceptance-Rate you can also consider the following projects:

sprawl - A high performance Rust-powered layout library [Moved to: https://github.com/DioxusLabs/taffy]

einops - Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)

morphorm - A UI layout engine written in Rust

AI-Conference-Info - Extensive acceptance rates and information of main AI conferences

gtfs_manager - A GUI for viewing and editing GTFS data

Graphite - 2D raster & vector editor that melds traditional layers & tools with a modern node-based, non-destructive, procedural workflow.

maplibre-rs - Experimental Maps for Web, Mobile and Desktop

rusty-dos - A Rust skeleton for an MS-DOS program for IBM compatibles and the PC-98, including some PC-98-specific functionality

dioxus - Fullstack GUI library for web, desktop, mobile, and more.