Youtube-Code-Repository
Repository for most of the code from my YouTube channel (by philtabor)
lunar-lander
By mugeshk97
| | Youtube-Code-Repository | lunar-lander |
|---|---|---|
| Mentions | 5 | 2 |
| Stars | 844 | 0 |
| Growth | - | - |
| Activity | 1.6 | 0.0 |
| Last commit | 10 months ago | about 3 years ago |
| Language | Python | Python |
| License | - | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Youtube-Code-Repository
Posts with mentions or reviews of Youtube-Code-Repository.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-04-28.
-
Overall loss in PPO, why does it matter?
In Phil Tabor's implementation, the actor and critic losses are calculated separately (line 95+), and equation 9 is not computed.
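The post above refers to computing the actor (policy) and critic (value) losses as two separate terms rather than one combined objective. A minimal sketch of what those two separate losses can look like, using PPO's clipped surrogate for the actor and a squared error for the critic (illustrative only; the function names and numbers are not taken from the linked code):

```python
import math

def ppo_actor_loss(new_log_prob, old_log_prob, advantage, clip=0.2):
    """Clipped surrogate objective for a single sample, negated so it
    can be minimized. ratio = pi_new(a|s) / pi_old(a|s)."""
    ratio = math.exp(new_log_prob - old_log_prob)
    unclipped = ratio * advantage
    # Clamp the ratio to [1 - clip, 1 + clip] before weighting the advantage.
    clipped = max(min(ratio, 1 + clip), 1 - clip) * advantage
    return -min(unclipped, clipped)

def critic_loss(value_estimate, return_target):
    """Simple squared error between the predicted state value and the
    observed return."""
    return (value_estimate - return_target) ** 2
```

In practice each loss is backpropagated through its own network (or its own head of a shared network), which is why an implementation can skip combining them into a single weighted sum.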
-
Intrinsic Curiosity Module Pytorch multithreading cpu unable to fix seeds
I am working on an extension of this implementation https://github.com/philtabor/Youtube-Code-Repository/tree/master/ReinforcementLearning/ICM of the Intrinsic Curiosity Module. It uses A3C (actor-critic) as the policy, and the ICM is a bolt-on module.
-
PPO cannot play CartPole ?
A very good reference implementation, which converges in 200 episodes.
-
RL algorithm implemented
Github code - https://github.com/philtabor/Youtube-Code-Repository/tree/master/ReinforcementLearning/PolicyGradient/DDPG/tensorflow2/pendulum
-
Lunar Lander using Deep Q-Learning
I was wondering why the code looked so familiar: not just the design, but even the syntax and the names of the functions. I went through these myself when I was learning: Youtube-Code-Repository/ReinforcementLearning/DeepQLearning at master · philtabor/Youtube-Code-Repository (github.com). It's by a YouTuber / Udemy course instructor who goes through the design and coding process from scratch. This is probably mostly lifted straight from that repo. He even has a video on doing the Lunar Lander example too.
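The Deep Q-Learning tutorials referenced above typically select actions with an epsilon-greedy policy. A minimal sketch of that step (a common pattern in DQN tutorials, not code copied from the linked repo):

```python
import random

def choose_action(q_values, epsilon):
    """Epsilon-greedy action selection over a list of Q-value estimates.
    With probability epsilon, take a random action (explore); otherwise
    take the action with the highest estimated Q-value (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

Epsilon usually starts near 1.0 and is decayed toward a small floor over training, so the agent explores early and exploits its learned Q-values later.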
lunar-lander
Posts with mentions or reviews of lunar-lander.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2021-03-18.
What are some alternatives?
When comparing Youtube-Code-Repository and lunar-lander you can also consider the following projects:
Respiratory-Disease-Coughing-Dataset-CNN - A collection of coughing audio files from Coswara, Coughvid, and Virufy as well as generated spectrograms for the use of machine learning
RL-Algorithms - This repository has RL algorithms implemented using python
easytorch - EasyTorch is a research-oriented pytorch prototyping framework with a straightforward learning curve. It is highly robust and contains almost everything needed to perform any state-of-the-art experiments.
ppo-implementation-details - The source code for the blog post The 37 Implementation Details of Proximal Policy Optimization
Quantsbin - Quantitative Finance tools
minimalRL - Implementations of basic RL algorithms with minimal lines of codes! (pytorch based)