envpool vs ViZDoom
| | envpool | ViZDoom |
|---|---|---|
| Mentions | 3 | 3 |
| Stars | 1,017 | 1,667 |
| Stars growth (monthly) | 3.5% | 1.4% |
| Activity | 4.2 | 8.9 |
| Last commit | about 1 month ago | 3 months ago |
| Language | C++ | C++ |
| License | Apache License 2.0 | - |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
envpool
- How do I improve my SB3 PPO on an EnvPool environment
  I am looking to improve the overall performance as well as to optimize the wall-clock time. I slightly modified the code from here to develop an SB3 wrapper for envpool.
- [D] Adding a new RL environment to envpool
  Envpool provides highly parallel execution of RL environments. Unfortunately, many environments are still not supported by it. One of them is FrankaKitchen from D4RL, a library for offline RL.
- [R] EnvPool: A Highly Parallel Reinforcement Learning Environment Execution Engine
  Code for https://arxiv.org/abs/2206.10558 found: https://github.com/sail-sg/envpool
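EnvPool's selling point, as the posts above describe, is that a single `step()` call advances a whole batch of environments in C++ worker threads and returns stacked numpy arrays. The sketch below mimics that batched interface with a dummy pure-numpy environment so the pattern is runnable without envpool installed; `DummyBatchedEnv` is a hypothetical stand-in, not part of the library (the real entry point is `envpool.make`).

```python
import numpy as np

class DummyBatchedEnv:
    """Hypothetical stand-in for an EnvPool-style batched environment:
    every call operates on all num_envs environments at once and
    returns arrays with a leading (num_envs, ...) batch dimension."""

    def __init__(self, num_envs: int, obs_dim: int = 4):
        self.num_envs = num_envs
        self.obs_dim = obs_dim

    def reset(self) -> np.ndarray:
        # One row of observations per environment.
        return np.zeros((self.num_envs, self.obs_dim), dtype=np.float32)

    def step(self, actions: np.ndarray):
        # The real engine steps all environments in parallel C++ threads;
        # here we just fabricate batched results of the right shapes.
        assert actions.shape == (self.num_envs,)
        obs = np.random.randn(self.num_envs, self.obs_dim).astype(np.float32)
        rewards = np.ones(self.num_envs, dtype=np.float32)
        dones = np.zeros(self.num_envs, dtype=bool)
        return obs, rewards, dones, {}

envs = DummyBatchedEnv(num_envs=8)
obs = envs.reset()
for _ in range(10):
    actions = np.zeros(envs.num_envs, dtype=np.int64)  # e.g. a no-op policy
    obs, rewards, dones, info = envs.step(actions)
print(obs.shape)  # (8, 4)
```

An SB3 `VecEnv` wrapper like the one mentioned in the first post is essentially an adapter from this batched step/reset signature to the shapes and reset semantics SB3 expects.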
ViZDoom
- Reinforcement learning libraries with AlphaZero
  AFAIK AlphaZero has not been used for continuous-action-space 3D environments like ViZDoom, so I wouldn't expect it to work well out of the box. There is a basic example demonstrating Q-learning on the environment: https://vizdoom.cs.put.edu.pl/tutorial#learning, as well as numerous example files covering various training methods: https://github.com/Farama-Foundation/ViZDoom/tree/master/examples/python
- ViZDoom 1.2.0: Reinforcement Learning environments based on the 1993 game Doom
  For more information about this release and ViZDoom, see https://github.com/Farama-Foundation/ViZDoom; about the Farama Foundation, see https://farama.org/; or join our Discord server: https://discord.gg/nhvKkYa6qX
- ViZDoom has joined the Farama Foundation
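The Q-learning tutorial linked in the first ViZDoom post relies on the standard tabular update rule, sketched here on a toy MDP. The state and action indices are illustrative, not ViZDoom's; this shows only the update itself, not how it is wired into the game.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One Q-learning step:
    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

Q = np.zeros((3, 2))  # toy table: 3 states, 2 actions
Q = q_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q[0, 1])  # first update moves Q(0, 1) a step of alpha toward reward 1.0
```

In a ViZDoom setting, `s` would come from discretizing (or, in the deep variant, feeding through a network) the screen buffer, and `a` would index one of the configured game buttons.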
What are some alternatives?
ns3-gym - The Playground for Reinforcement Learning in Networking Research
open_spiel - OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.
thread-pool - BS::thread_pool: a fast, lightweight, and easy-to-use C++17 thread pool library
bomberland - Bomberland: a multi-agent AI competition based on Bomberman. This repository contains both starter / hello world kits + the engine source code
matplotlibcpp17 - Alternative to matplotlibcpp with better syntax, based on pybind
nodebuilder - An experimental DOOM Node Builder, written in C++
ecole - Extensible Combinatorial Optimization Learning Environments
AI-Toolbox - A C++ framework for MDPs and POMDPs with Python bindings
Taskflow - A General-purpose Parallel and Heterogeneous Task Programming System
loneliless - A Deep Q-Network playing a single-player Pong game. The network is implemented in Python (TensorFlow-GPU), the single-player Pong game in C++ (openFrameworks), and the two are bound together with pybind11.
pyTORCS-docker - Docker-based, gym-like torcs environment with vision.
odamex - Online multiplayer Doom port with a strong focus on the original gameplay while providing a breadth of enhancements.