bet VS returnn-experiments

Compare bet vs returnn-experiments and see what their differences are.

bet

Code and website for Behavior Transformers: Cloning k modes with one stone. (by notmahi)
Metric         bet               returnn-experiments
Mentions       3                 2
Stars          94                152
Growth         -                 0.0%
Activity       2.1               6.0
Latest commit  about 1 year ago  3 days ago
Language       Python            Python
License        MIT License       -
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

bet

Posts with mentions or reviews of bet. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-09-06.
  • Dobb·E: An open-source framework for learning household robotic manipulation
    1 project | news.ycombinator.com | 29 Nov 2023
    Indeed! In fact, I have a project [0] from last year that uses a GPT-style transformer to address that exact issue :) However, it’s hard to go far outside simulations in real home robotics without a good platform, and out of those efforts came Dobb-E.

    [0] https://mahis.life/bet/

  • Minimal PyTorch re-implementation of GPT
    6 projects | news.ycombinator.com | 6 Sep 2022
  • Show HN: We trained a (mini) GPT for multi-modal robot behaviors
    1 project | news.ycombinator.com | 24 Jun 2022
    Hi HN!

    First author of the paper here, thought some of you may enjoy reading about this! Even now, training robots on human demonstration data is the best way to get them to do new and exciting things in the real world. However, this generally requires a lot of data curation in the standard way: the robots can only follow along if you give them data that solves a single task in a single way.

    To improve the status quo, in this paper we introduce the Behavior Transformer, which can learn from unlabeled demonstration data that solves multiple different tasks in different ways, using a GPT-like generative model. We had to make some modifications to handle continuous actions, unlike the standard GPT model, which fits discrete words.

    As it turns out, unconditional rollouts from this model show much more "natural" behavior (i.e. different tasks solved in different rollouts in different ways) than standard behavioral cloning. More importantly, behavior transformers show much better mode coverage compared to previous models, and show some level of compositionality. Check out our videos! [1]

    Finally, another oft-ignored part I am quite proud of is our code release -- we worked quite hard to make sure our code [2] is easy to read, reproduce, and remix! And did I tell you that these models train super fast? The Franka Kitchen environment in the top video [3] takes just 10 minutes on an Nvidia 3080 to get to the point you are seeing in the video. Compare that with standard RL training, and you might agree with me that a small number of demonstrations can truly go a long way!

    Happy to answer questions, as well! Have a great Friday, wherever you are :)

    [1] https://mahis.life/bet

    [2] https://github.com/notmahi/bet

    [3] https://mahis.life/bet/more/kitchen/
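The modification described in the post above (fitting continuous actions with a discrete-token model) can be sketched roughly as follows: map each continuous action to its nearest cluster center (a discrete "token" a GPT-style head can predict) plus a continuous residual offset. This is only an illustrative sketch; the function names and details are made up here, not taken from the bet repository, and the real cluster centers would come from something like k-means over the demonstration actions.

```python
import numpy as np

def discretize(actions, centers):
    """Map continuous actions to (bin index, residual offset) pairs."""
    # actions: (N, D), centers: (k, D); broadcast to pairwise distances (N, k)
    dists = np.linalg.norm(actions[:, None, :] - centers[None, :, :], axis=-1)
    bins = dists.argmin(axis=1)        # discrete "token" per action
    offsets = actions - centers[bins]  # continuous residual per action
    return bins, offsets

def reconstruct(bins, offsets, centers):
    """Invert discretize: bin center plus (predicted) residual offset."""
    return centers[bins] + offsets

# Toy example with two hand-picked centers in a 2-D action space.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
acts = np.array([[0.1, -0.1], [0.9, 1.2]])
bins, offsets = discretize(acts, centers)
recon = reconstruct(bins, offsets, centers)
```

A model would then predict `bins` with a classification head and `offsets` with a regression head; reconstruction is exact by construction when both are given.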

returnn-experiments

Posts with mentions or reviews of returnn-experiments. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-29.
  • Show HN: WhisperFusion – Ultra-low latency conversations with an AI chatbot
    7 projects | news.ycombinator.com | 29 Jan 2024
    The code is all released already. You find it here: https://github.com/rwth-i6/returnn-experiments/tree/master/2...

    This is TensorFlow-based. But I also have another PyTorch-based implementation already, also public (inside our other repo, i6_experiments). It's not so easy currently to set this up, but I'm working on a simpler pipeline in PyTorch.

    We don't have the models online yet, but we can upload them later. But I'm not sure how useful they are outside of research, as they are specifically for those research tasks (Librispeech, Tedlium), and probably don't perform too well on other data.

  • Minimal PyTorch re-implementation of GPT
    6 projects | news.ycombinator.com | 6 Sep 2022
    This works for an architecture which has been well tuned and studied before, like LSTM or Transformer.

    Once you do research on the model, testing things out, it often tends to become such a kwargs monster in many frameworks.

    Having everything (relevant) in one file (even in the config file itself, with hyperparams) allows you to copy the file for every experiment and modify it in place. This avoids the kwargs mess. But then the config files are very complex, and can become messy in other ways (esp. for research projects). Example: https://github.com/rwth-i6/returnn-experiments/blob/master/2...

    Such an approach makes it much more flexible and does not mess with the baseline code. As you say, it's more like an evolutionary, DNA-like approach, where you then tend to do crossovers with other evolved, good-performing configs, etc.
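The "one self-contained config file per experiment" pattern described above can be sketched as a plain Python module: hyperparameters live at module level, and the build functions read them directly rather than threading kwargs through framework code. The names here are hypothetical, for illustration only, not taken from returnn-experiments.

```python
# experiment_baseline.py -- copy this file per experiment and edit in place.

# Hyperparameters at module level: diffing two experiments is just
# diffing two files, with no kwargs plumbing in the framework.
num_layers = 6
hidden_dim = 512
learning_rate = 1e-3

def make_model():
    """Build the model spec directly from the module-level hyperparameters."""
    return {"layers": num_layers, "dim": hidden_dim}

def make_optimizer():
    """Build the optimizer spec the same way."""
    return {"lr": learning_rate}
```

A new experiment is then `cp experiment_baseline.py experiment_deeper.py` followed by editing `num_layers`, which is the "crossover" workflow the post describes.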

What are some alternatives?

When comparing bet and returnn-experiments you can also consider the following projects:

iris - Transformers are Sample-Efficient World Models. ICLR 2023, notable top 5%.

minGPT - A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training

WhisperFusion - WhisperFusion builds upon the capabilities of WhisperLive and WhisperSpeech to provide seamless conversations with an AI.
