

-
RWKV-LM
RWKV (pronounced RwaKuv) is an RNN with strong LLM performance that can also be trained directly like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". It combines the best of RNNs and transformers: strong performance, linear time, constant space (no KV cache), fast training, infinite ctx_len, and free sentence embedding.
This is what RWKV (https://github.com/BlinkDL/RWKV-LM) was made for, and what it will be good at.
Wow. Pretty darn cool! <3 :'))))
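The "RNN at inference, parallel at training" claim can be illustrated with a toy sketch of a simplified RWKV-4-style wkv attention over a single scalar channel. Everything here (function names, the scalar decay `w`, the current-token bonus `u`, the naive exponentials) is my own simplification for illustration, not code from the repo:

```python
import numpy as np

def wkv_recurrent(k, v, w, u):
    """O(T) recurrent form: constant-size state (a, b), no KV cache."""
    T = len(k)
    out = np.empty(T)
    a, b = 0.0, 0.0  # running weighted sum of values / sum of weights
    for t in range(T):
        # current token gets a bonus weight exp(u + k[t])
        out[t] = (a + np.exp(u + k[t]) * v[t]) / (b + np.exp(u + k[t]))
        # fold the current token into the state, decaying the past by exp(-w)
        a = np.exp(-w) * a + np.exp(k[t]) * v[t]
        b = np.exp(-w) * b + np.exp(k[t])
    return out

def wkv_parallel(k, v, w, u):
    """Attention-like form: each position attends to all past positions,
    with weight exp(-(t-1-i)*w + k[i]); trainable in parallel like a GPT."""
    T = len(k)
    out = np.empty(T)
    for t in range(T):
        weights = np.exp(-(t - 1 - np.arange(t)) * w + k[:t])
        num = weights @ v[:t] + np.exp(u + k[t]) * v[t]
        den = weights.sum() + np.exp(u + k[t])
        out[t] = num / den
    return out
```

The two forms are algebraically identical, which is the point: training can use the parallel form while inference streams through the recurrent one with O(1) state per channel.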
-
SimpleReinforcementLearning
A demonstration of table-based SARSA reinforcement learning for a simple cat/mouse game
+1 you beat me to the punch! I think it's helpful to start with simple RL and ignore the "deep" part to get the basics. The first several lectures in this series do that well. It helped me build a simple "cat and mouse" RL simulation https://github.com/gtoubassi/SimpleReinforcementLearning and ultimately a reproduction of the DQN atari game playing agent: https://github.com/gtoubassi/dqn-atari.
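Table-based SARSA really is small enough to fit in a few lines. Below is a toy sketch in the spirit of the cat/mouse idea, on a one-dimensional corridor I made up (cat at one end, cheese at the other); the environment, reward values, and hyperparameters are my own assumptions, not the repo's:

```python
import random

def sarsa(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, n=5):
    # States 0..n-1 on a corridor: the cat sits at 0, the cheese at n-1.
    # Actions: 0 = move left, 1 = move right.
    Q = [[0.0, 0.0] for _ in range(n)]
    rng = random.Random(0)

    def policy(s):
        # epsilon-greedy over the Q-table
        if rng.random() < eps:
            return rng.randrange(2)
        return 0 if Q[s][0] > Q[s][1] else 1

    for _ in range(episodes):
        s = n // 2          # mouse starts in the middle
        a = policy(s)
        while True:
            s2 = max(0, min(n - 1, s + (1 if a == 1 else -1)))
            if s2 == n - 1:
                r, done = 1.0, True    # reached the cheese
            elif s2 == 0:
                r, done = -1.0, True   # caught by the cat
            else:
                r, done = 0.0, False
            a2 = policy(s2)
            # SARSA is on-policy: the target uses the action actually
            # taken next (a2), not the greedy max as in Q-learning.
            target = r + (0.0 if done else gamma * Q[s2][a2])
            Q[s][a] += alpha * (target - Q[s][a])
            if done:
                break
            s, a = s2, a2
    return Q
```

After training, the greedy policy from any interior state should head toward the cheese, which is the whole "basics without the deep part" lesson.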
-
dqn-atari
A TensorFlow-based implementation of the DeepMind Atari-playing "Deep Q-Learning" agent that works reasonably well (by gtoubassi)
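The core of any DQN, including a reproduction like this one, is the Bellman target computed from a frozen target network. A minimal sketch of that target computation (the function name and argument shapes are my own illustration, not this repo's API):

```python
import numpy as np

def dqn_targets(rewards, dones, next_q, gamma=0.99):
    """Bellman targets for a batch of transitions.

    rewards: (batch,) rewards r
    dones:   (batch,) 1.0 if the episode ended at this transition, else 0.0
    next_q:  (batch, n_actions) Q-values for s' from the frozen target network
    Returns y = r + gamma * (1 - done) * max_a Q_target(s', a).
    """
    return rewards + gamma * (1.0 - dones) * next_q.max(axis=1)
```

The (1 - done) mask zeroes out the bootstrap term on terminal transitions, and using a periodically-copied target network for `next_q` is what keeps the regression targets stable.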
Related posts
-
Ask HN: Is anybody building an alternative transformer?
-
Do LLMs need a context window?
-
Paving the way to efficient architectures: StripedHyena-7B
-
Understanding Deep Learning
-
"If you see a startup claiming to possess top-secret results leading to human level AI, they're lying or delusional. Don't believe them!" - Yann LeCun, on the conspiracy theories of "X company has reached AGI in secret"