Rubiks-Cube-Reinforcement-Learning Alternatives
Similar projects and alternatives to Rubiks-Cube-Reinforcement-Learning based on common topics and language
-
FinRL
Discontinued Deep Reinforcement Learning Framework to Automate Trading in Quantitative Finance. NeurIPS 2020 & ICAIF 2021. [Moved to: https://github.com/AI4Finance-Foundation/FinRL] (by AI4Finance-LLC)
Rubiks-Cube-Reinforcement-Learning reviews and mentions
-
Solving a Rubik's Cube from Scratch
https://i.redd.it/lfjz74cn6wc61.gif
For my final-year university project I trained an AI to solve a Rubik's Cube purely using reinforcement learning. The project follows the algorithm described in this paper: http://deepcube.igb.uci.edu/static/files/SolvingTheRubiksCubeWithDeepReinforcementLearningAndSearch_Final.pdf.
The algorithm first trains a neural network to estimate, given a scrambled position, how many moves it is from the solved state. This was done using simple value iteration, with the training dataset generated on the fly by randomly scrambling cubes to depths of 1 to 40. Once training is complete, the network is used as the heuristic in an A* search. The classic A* algorithm is modified with a depth weighting that trades optimality for speed.
Training took around 7 days on a single Tesla P100 GPU. Parallel training definitely should have been used, but it would have taken a fair amount of work to implement, so it was left out. This also meant that hyperparameter tuning and experimenting with network architectures were fairly limited.
Compared to the results in the paper, my AI is slower and less optimal: solves take around 60 seconds on average, with solution lengths around 40 moves. I was still extremely happy with the results, as I had neither the computational power nor the experience of the researchers, and compared with most of the other projects on GitHub, being able to solve a 3x3 cube at all is an achievement.
The algorithm transfers to many other puzzles; I have successfully trained it on the 2x2 Cube, the 15-Puzzle and the 24-Puzzle as well. The code is on my GitHub page: https://github.com/PhadonP/Rubiks-Cube-Reinforcement-Learning. Many more details are in the PDF report found in the repo.
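The depth-weighted A* idea described above can be sketched generically. This is a minimal illustration, not the repo's actual implementation: the function name `weighted_a_star`, the `weight` parameter, and the callback signatures are all assumptions, and `h` stands in for the trained network's move-count estimate.

```python
import heapq
import itertools

def weighted_a_star(start, is_goal, neighbors, h, weight=0.6):
    """Depth-weighted A*: expand nodes by f(n) = weight * g(n) + h(n).

    With weight < 1 the path cost g is discounted relative to the
    heuristic h, biasing the search toward deeper nodes and trading
    solution optimality for speed, as described in the post.
    """
    tie = itertools.count()  # tiebreaker so the heap never compares states
    frontier = [(h(start), next(tie), 0, start, [])]
    best_g = {start: 0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path  # sequence of moves from start to goal
        for move, nxt in neighbors(state):
            ng = g + 1
            if nxt not in best_g or ng < best_g[nxt]:
                best_g[nxt] = ng
                f = weight * ng + h(nxt)
                heapq.heappush(frontier, (f, next(tie), ng, nxt, path + [move]))
    return None
```

As a toy usage example, searching on a number line with `h = abs` finds the five-move path from 5 to 0; for the cube, `state` would be a cube position, `neighbors` would apply the twelve face turns, and `h` would be the neural network's prediction.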
Stats
PhadonP/Rubiks-Cube-Reinforcement-Learning is an open source project licensed under the MIT License, an OSI-approved license.
The primary programming language of Rubiks-Cube-Reinforcement-Learning is Jupyter Notebook.