Rubiks-Cube-Reinforcement-Learning vs recurrent-ppo-truncated-bptt

| | Rubiks-Cube-Reinforcement-Learning | recurrent-ppo-truncated-bptt |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 38 | 106 |
| Growth | - | - |
| Activity | 0.0 | 3.2 |
| Last commit | over 2 years ago | 19 days ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Rubiks-Cube-Reinforcement-Learning
- Solving a Rubik's Cube from Scratch
https://i.redd.it/lfjz74cn6wc61.gif

For my final-year university project I trained an AI to solve a Rubik's Cube purely using reinforcement learning. The project follows the algorithm from this paper: http://deepcube.igb.uci.edu/static/files/SolvingTheRubiksCubeWithDeepReinforcementLearningAndSearch_Final.pdf. The algorithm first trains a neural network that, given a scrambled position, estimates the number of moves needed to reach the solved position. This was done using simple value iteration, with the training dataset generated on the fly by randomly scrambling cubes to depths of 1 to 40. Once training is complete, the network is used as the heuristic in an A* search. The classic A* algorithm was modified with a depth weighting that trades optimality for speed.

Training took around 7 days on a single Tesla P100 GPU. Parallel training definitely should have been used, but it would have taken substantial work to implement, so it was left out. This also meant hyperparameter tuning and experimentation with network architectures were quite limited. Compared to the results in the paper, my AI is slower and less optimal: solving takes around 60 seconds on average, with solution lengths around 40 moves. I was still extremely happy with the results, as I had neither the computational power nor the experience of the researchers, and compared with most other projects on GitHub, being able to solve a 3x3 cube at all is an achievement.

The algorithm transfers to many other puzzles; I have successfully trained it on the 2x2 Cube, the 15-Puzzle, and the 24-Puzzle as well. The code is here: https://github.com/PhadonP/Rubiks-Cube-Reinforcement-Learning. Many more details can be found in the PDF report in the repo.
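The post doesn't include code, but the depth-weighted A* it describes is straightforward to sketch. Below is a minimal, hypothetical Python version: `is_solved`, `neighbors`, and `heuristic` (the trained network wrapped as a state-to-cost-to-go estimator) are stand-ins for the project's actual interfaces, and `depth_weight=0.6` is an illustrative value, not one taken from the project.

```python
import heapq
import itertools

def weighted_a_star(start, is_solved, neighbors, heuristic, depth_weight=0.6):
    """Weighted A* with f(n) = depth_weight * g(n) + h(n).

    depth_weight < 1 discounts the path cost g, so the search leans more
    on the learned heuristic h: faster, but solutions may be longer than
    optimal. States must be hashable (e.g. a tuple encoding of the cube).
    """
    tie = itertools.count()  # tie-breaker so the heap never compares states
    frontier = [(heuristic(start), next(tie), 0, start, [])]
    best_g = {start: 0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue  # stale heap entry superseded by a cheaper path
        if is_solved(state):
            return path  # list of moves from start to the solved state
        for move, nxt in neighbors(state):
            g2 = g + 1  # every cube move costs 1
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                f = depth_weight * g2 + heuristic(nxt)
                heapq.heappush(frontier, (f, next(tie), g2, nxt, path + [move]))
    return None  # no solution found (should not happen for a valid scramble)
```

Setting depth_weight to 1.0 recovers classic A*; lowering it is exactly the optimality-for-speed trade-off described above.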
recurrent-ppo-truncated-bptt
- What RL library supports custom LSTM and Transformer neural networks to use with algorithms such as PPO?
I provide baseline implementations of TransformerXL + PPO and LSTM/GRU + PPO. These are designed to be slim and easy to follow, so you can extend them with the features and tooling you need.
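On the first question below, a recurrent generator differs from a standard PPO generator in that it hands out fixed-length sequences rather than shuffled single steps, together with the recurrent state recorded at the start of each sequence, so gradients flow only within a sequence (truncated BPTT). The sketch below is a minimal illustration under assumed tensor shapes, not the repository's actual API; the function name and arguments are hypothetical.

```python
import torch

def recurrent_minibatches(obs, actions, advantages, hidden_states,
                          seq_len=8, batch_size=32):
    """Yield mini-batches of fixed-length sequences for truncated BPTT.

    obs:           (T, obs_dim)    flattened rollout of T steps
    hidden_states: (T, hidden_dim) LSTM/GRU states recorded during rollout
    """
    T = obs.shape[0]
    # Non-overlapping windows; any remainder shorter than seq_len is dropped.
    starts = torch.arange(0, T - seq_len + 1, seq_len)
    perm = starts[torch.randperm(len(starts))]  # shuffle whole sequences
    offsets = torch.arange(seq_len)
    for i in range(0, len(perm), batch_size):
        idx = perm[i:i + batch_size]          # (B,) sequence start indices
        seq_idx = idx.unsqueeze(1) + offsets  # (B, seq_len) step indices
        yield {
            "obs": obs[seq_idx],              # (B, seq_len, obs_dim)
            "actions": actions[seq_idx],
            "advantages": advantages[seq_idx],
            # State from the step each sequence begins at; recorded without
            # gradients during rollout, so BPTT truncates at this boundary.
            "initial_hidden": hidden_states[idx],
        }
```

During the PPO update the policy's LSTM/GRU is unrolled over each seq_len-step window starting from initial_hidden, so no gradient crosses a sequence boundary; that truncation is what keeps memory and compute bounded.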
- How does a recurrent generator work in PPO?
- LSTM encoder in the policy?
- What is the best approach to POMDP environments?
Second, when training a limited-view agent in a tabular environment, I expected the recurrent PPO agent to outperform CNN-based PPO, but it didn't. I used this repository's existing implementation and observed slow learning with it.
- LSTM with SAC not learning well on tasks like Mountain Car and Lunar Lander?
- Recurrent PPO using truncated BPTT
What are some alternatives?
FinRL - Deep Reinforcement Learning Framework to Automate Trading in Quantitative Finance. NeurIPS 2020 & ICAIF 2021. [Moved to: https://github.com/AI4Finance-Foundation/FinRL]
ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
pomdp-baselines - Simple (but often Strong) Baselines for POMDPs in PyTorch, ICML 2022
snakeAI - Testing MLP, DQN, PPO, SAC, and policy-gradient agents on Snake
PPO-PyTorch - Minimal implementation of clipped objective Proximal Policy Optimization (PPO) in PyTorch
neroRL - Deep reinforcement learning framework built with PyTorch
pytorch-a2c-ppo-acktr-gail - PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).
cleanrl - High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
ppo-implementation-details - The source code for the blog post The 37 Implementation Details of Proximal Policy Optimization
popgym - Partially Observable Process Gym
episodic-transformer-memory-ppo - Clean baseline implementation of PPO using an episodic TransformerXL memory
gym-continuousDoubleAuction - A custom MARL (multi-agent reinforcement learning) environment where multiple agents trade against one another (self-play) in a zero-sum continuous double auction. Ray [RLlib] is used for training.