RHO-Loss
By OATML
flash-attention-jax
Implementation of Flash Attention in Jax (by lucidrains)
| | RHO-Loss | flash-attention-jax |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 143 | 83 |
| Growth | 3.5% | - |
| Activity | 5.4 | 8.1 |
| Latest commit | 6 months ago | 5 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RHO-Loss
Posts with mentions or reviews of RHO-Loss.
We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-08-14.
- [D] Most important AI Papers this year so far in my opinion + Proto AGI speculation at the end

  RHO-LOSS - Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt - trains models 18x faster with higher accuracy. Paper: https://arxiv.org/abs/2206.07137 GitHub: https://github.com/OATML/RHO-Loss
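For context on what the repository implements: RHO-LOSS scores each candidate point by its reducible holdout loss, i.e. the current model's training loss minus the loss of a small model trained on holdout data, and fills each batch with the highest-scoring points. The sketch below illustrates only that selection rule in plain Python; the function and variable names are illustrative and this is not the repository's API.

```python
import numpy as np

def select_rho_batch(train_losses, irreducible_losses, batch_size):
    """Pick the points with the highest reducible holdout loss.

    train_losses: per-example loss of the model being trained, shape [N]
    irreducible_losses: per-example loss of a small model trained on a
        holdout set, precomputed once, shape [N]
    Returns the indices of the `batch_size` highest-scoring examples.
    """
    # Reducible holdout loss: high when the point is still learnable
    # (large training loss) but not mere noise (small irreducible loss).
    rho_scores = train_losses - irreducible_losses
    # Indices of the top-`batch_size` scores.
    return np.argsort(rho_scores)[-batch_size:]

# Example: score a candidate pool of 10 points and keep the best 3.
rng = np.random.default_rng(0)
train_losses = rng.uniform(0.0, 5.0, size=10)
irreducible_losses = rng.uniform(0.0, 2.0, size=10)
print(select_rho_batch(train_losses, irreducible_losses, batch_size=3))
```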
flash-attention-jax
Posts with mentions or reviews of flash-attention-jax.
We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-08-14.
- [D] Most important AI Papers this year so far in my opinion + Proto AGI speculation at the end

  FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness. Paper: https://arxiv.org/abs/2205.14135 GitHub: https://github.com/HazyResearch/flash-attention and https://github.com/lucidrains/flash-attention-jax
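For context, FlashAttention computes exact attention block by block with a running (online) softmax so the full attention matrix never has to be materialised; flash-attention-jax implements this idea in pure JAX. The sketch below shows the core recurrence in jax.numpy as a simplified single-head illustration - it is not the repository's implementation, and all names are made up.

```python
import jax
import jax.numpy as jnp
from jax import random

def blockwise_attention(q, k, v, block_size=128):
    """Exact softmax attention computed one key/value block at a time with a
    running (online) softmax, so the full [seq, seq] score matrix is never
    materialised. q, k, v have shape [seq, dim]."""
    seq, dim = q.shape
    scale = dim ** -0.5
    out = jnp.zeros_like(q)                 # running weighted sum of values
    row_max = jnp.full((seq, 1), -jnp.inf)  # running max score per query row
    row_sum = jnp.zeros((seq, 1))           # running softmax denominator

    for start in range(0, seq, block_size):
        k_blk = k[start:start + block_size]
        v_blk = v[start:start + block_size]
        scores = (q @ k_blk.T) * scale                             # [seq, block]
        new_max = jnp.maximum(row_max, scores.max(axis=-1, keepdims=True))
        correction = jnp.exp(row_max - new_max)  # rescale previously seen blocks
        probs = jnp.exp(scores - new_max)
        row_sum = row_sum * correction + probs.sum(axis=-1, keepdims=True)
        out = out * correction + probs @ v_blk
        row_max = new_max

    return out / row_sum

# Sanity check against naive attention on a small example.
q_key, k_key, v_key = random.split(random.PRNGKey(0), 3)
q = random.normal(q_key, (256, 64))
k = random.normal(k_key, (256, 64))
v = random.normal(v_key, (256, 64))
naive = jax.nn.softmax((q @ k.T) * 64 ** -0.5, axis=-1) @ v
assert jnp.allclose(blockwise_attention(q, k, v), naive, atol=1e-4)
```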
What are some alternatives?
When comparing RHO-Loss and flash-attention-jax you can also consider the following projects:
EfficientZero - Open-source codebase for EfficientZero, from "Mastering Atari Games with Limited Data" at NeurIPS 2021.
flash-attention - Fast and memory-efficient exact attention
CodeRL - This is the official code for the paper CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning (NeurIPS22).
msn - Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141)
perceiver-ar
XMem - [ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
google-research - Google Research
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.