iris vs block-recurrent-transformer-py

| | iris | block-recurrent-transformer-py |
| --- | --- | --- |
| Mentions | 8 | 1 |
| Stars | 756 | - |
| Growth | - | - |
| Activity | 1.9 | - |
| Last commit | 3 months ago | - |
| Language | Python | - |
| License | GNU General Public License v3.0 only | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
iris
From Deep to Long Learning
Yeah, after all, these LLMs are predicting one sequence of tokens from another sequence of tokens, and the tokens could be anything; it just "happens" that text carries the most knowledge and is the easiest to input. Then there are image, sound, and video, but tokens could also be learned from world experience in RL:
Transformers are Sample-Efficient World Models:
https://github.com/eloialonso/iris#transformers-are-sample-e...
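The "tokens from world experience" idea boils down to quantizing continuous observations against a learned codebook, which is roughly what IRIS's discrete autoencoder does to game frames. A minimal PyTorch sketch of that lookup step, with illustrative sizes and names (not the actual IRIS code):

```python
import torch

# Hypothetical codebook and features; in IRIS these would come from a
# learned discrete autoencoder over game frames.
codebook = torch.randn(512, 64)          # 512 possible tokens, 64-dim codes
features = torch.randn(32, 64)           # e.g. encoded image patches

# Nearest-codebook lookup: each feature vector becomes one discrete token id.
dists = torch.cdist(features, codebook)  # (32, 512) pairwise distances
tokens = dists.argmin(dim=-1)            # (32,) integer token ids

# These ids can then be fed to a GPT-style sequence model like text tokens.
print(tokens[:8])
```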
- What is the next booming topic in Deep RL?
Most Popular AI Research Sept 2022 - Ranked Based On Total GitHub Stars
Transformers are Sample Efficient World Models https://github.com/eloialonso/iris https://arxiv.org/abs/2209.00588v1
- [D] Most Popular AI Research Sept 2022 - Ranked Based On GitHub Stars
Minimal PyTorch re-implementation of GPT
This is actually a pretty neat, self-contained implementation that can be extended super easily beyond stereotypical natural language models, for example to create world models for video games [1] or to create robot models that learn to imitate from large, chaotic human demonstration data [2] (disclaimer: I'm an author on the second one). Basically, GPT (or minGPT) models are EXCELLENT sequence modelers, almost to the point where you can throw any sensible sequence data at them and hope to get interesting results, as long as you don't overfit.
Even though I have only been working on machine learning for around six years, it's crazy to see how fast the landscape has changed recently, including diffusion models and transformers. It's not too much to say that we might expect more major breakthroughs by the end of this decade, and end up in a place we can't even imagine right now!
[1] https://github.com/eloialonso/iris
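To make the "any sensible sequence" point concrete, here is a tiny GPT-style decoder in plain PyTorch (not minGPT or IRIS itself; all names and sizes are illustrative) that next-token-predicts arbitrary discrete tokens, whether they come from text, image patches, or RL experience:

```python
import torch
import torch.nn as nn

class TinyWorldModelGPT(nn.Module):
    def __init__(self, vocab_size=512, dim=128, heads=4, layers=2, max_len=64):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)   # token identity
        self.pos_emb = nn.Embedding(max_len, dim)      # position in sequence
        block = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        b, t = tokens.shape
        pos = torch.arange(t, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        # Causal mask so each position only attends to earlier positions.
        mask = nn.Transformer.generate_square_subsequent_mask(t).to(tokens.device)
        return self.head(self.blocks(x, mask=mask))

# The tokens here are random stand-ins, but they could equally be text,
# quantized image patches, or interleaved observation/action tokens.
model = TinyWorldModelGPT()
seq = torch.randint(0, 512, (8, 64))        # batch of token sequences
logits = model(seq[:, :-1])                 # predict the next token
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), seq[:, 1:].reshape(-1))
loss.backward()
```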
- Transformers are Sample Efficient World Models
- [R] Transformers are Sample Efficient World Models: With the equivalent of only two hours of gameplay in the Atari 100k benchmark, IRIS outperforms humans on 10 out of 26 games and surpasses MuZero.
block-recurrent-transformer-py
From Deep to Long Learning
That line of research is still going: https://github.com/lucidrains/block-recurrent-transformer-py... I think it is worth continuing research on both fronts.
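For reference, the repository's README documents a usage pattern along these lines; treat this as a sketch of the documented interface, with illustrative hyperparameters, rather than a verified snippet:

```python
import torch
from block_recurrent_transformer_pytorch import BlockRecurrentTransformer

# Hyperparameters here are illustrative; see the repo for documented defaults.
model = BlockRecurrentTransformer(
    num_tokens = 20000,        # vocabulary size
    dim = 512,
    depth = 6,
    max_seq_len = 1024,
    block_width = 512,         # width of each recurrent block
    num_state_vectors = 512,   # size of the carried recurrent state
    recurrent_layers = (4,),   # which layers carry the recurrence
)

seq = torch.randint(0, 20000, (1, 1024))

# The model returns logits plus memories/states that can be fed back in on
# the next call, letting context span far beyond a single block.
logits, memories, states = model(seq)
logits, memories, states = model(seq, xl_memories = memories, states = states)
```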
What are some alternatives?
setfit - Efficient few-shot learning with Sentence Transformers
block-recurrent-transformer-pytorch - Implementation of Block Recurrent Transformer - Pytorch
Text2Light - [SIGGRAPH Asia 2022] Text2Light: Zero-Shot Text-Driven HDR Panorama Generation
heinsen_routing - Reference implementation of "An Algorithm for Routing Vectors in Sequences" (Heinsen, 2022) and "An Algorithm for Routing Capsules in All Domains" (Heinsen, 2019), for composing deep neural networks.
machine-learning-articles - 🧠💬 Articles I wrote about machine learning, archived from MachineCurve.com.
motion-diffusion-model - The official PyTorch implementation of the paper "Human Motion Diffusion Model"
CSL - [COLING 2022] CSL: A Large-scale Chinese Scientific Literature Dataset 中文科学文献数据集
VToonify - [SIGGRAPH Asia 2022] VToonify: Controllable High-Resolution Portrait Video Style Transfer
storydalle
minGPT - A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
rliable - [NeurIPS'21 Outstanding Paper] Library for reliable evaluation on RL and ML benchmarks, even with only a handful of seeds.