| | nle | qw |
|---|---|---|
| Mentions | 15 | 3 |
| Stars | 932 | 38 |
| Stars growth | 0.4% | - |
| Activity | 3.7 | 0.0 |
| Latest commit | 9 days ago | about 3 years ago |
| Language | C | Shell |
| License | GNU General Public License v3.0 or later | - |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
nle
- What if we set GPT-4 free in Minecraft?
-
Voyager: An LLM-powered learning agent in Minecraft
Precisely. I really hope someone does NetHack next and leverages the learning environment that's already customized for it.
-
Analyzer for Nethack idea - problem with getting data from another program
You should look at the NetHack Learning Environment.
-
[D] We're the Meta AI research team behind CICERO, the first AI agent to achieve human-level performance in the game Diplomacy. We’ll be answering your questions on December 8th starting at 10am PT. Ask us anything!
There are quite a few open-source reinforcement learning challenges that you can explore with modest amounts of compute in order to build some experience training RL models, for example the NetHack Learning Environment, Atari, Minigrid, etc. For me personally, I had only worked in NLP / dialogue for years but got into RL by implementing Random Network Distillation models for NetHack. It's a fun area that definitely has its own unique challenges vs. other domains. -AM
- Facebook AI which plays NetHack
- The NetHack Learning Environment
-
Hacker News top posts: Nov 12, 2022
The NetHack Learning Environment (2 comments)
qw
- Facebook AI which plays NetHack
-
What are the advantages of a turn-based roguelike vs a realtime action roguelike-like?
You can get bots to play some roguelikes for you. See, for instance, this one for Dungeon Crawl Stone Soup: https://github.com/elliptic/qw
-
Streaming while coding the DCSS AI API
This is a really cool project! In your paper, you mentioned the qw bot, which has a 15% win rate (or did - it looks like it hasn't been updated in four years, so I don't know whether it would even work on recent versions) using hand-coded Lua rules in an rc file. I know your focus with the wrapper is more on dynamic approaches like reinforcement learning, but I'm curious: if someone did want to make another qw-like agent with totally hand-coded heuristics, do you think your wrapper would be easier to work with than the clua interface?
What are some alternatives?
wa-tunnel - Tunneling Internet traffic over WhatsApp
dcss-ai-wrapper - An API for Dungeon Crawl Stone Soup for Artificial Intelligence research.
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
LeanQt - LeanQt is a stripped-down Qt version easy to build from source and to integrate with an application.
BotHack - A NetHack bot framework
RL-Adventure - PyTorch implementation of DQN / DDQN / prioritized replay / noisy networks / distributional values / Rainbow / hierarchical RL
Voyager - An Open-Ended Embodied Agent with Large Language Models
ghostly - Ghostly is a simple, lightweight, and fast full-stack Go framework
machin - Reinforcement learning library (framework) designed for PyTorch; implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA ...
maze - Maze Applied Reinforcement Learning Framework
crowd-jpeg