tianshou vs pytorch-learn-reinforcement-learning

| | tianshou | pytorch-learn-reinforcement-learning |
|---|---|---|
| Mentions | 8 | 3 |
| Stars | 7,406 | 139 |
| Growth | 1.3% | - |
| Activity | 9.5 | 0.0 |
| Last commit | 6 days ago | almost 3 years ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
tianshou
Is it better not to use a target update frequency in Double DQN, or does it depend on the application?
The tianshou implementation I found at https://github.com/thu-ml/tianshou/blob/master/tianshou/policy/modelfree/dqn.py implements vanilla DQN by default.
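The distinction behind the question can be sketched in a few lines of plain Python (my own hedged illustration, not code from tianshou): vanilla DQN lets the target network both select and evaluate the next action, while Double DQN selects the action with the online network and evaluates it with the target network, which reduces overestimation bias.

```python
def dqn_target(q_next_target, rewards, dones, gamma=0.99):
    # Vanilla DQN: the target network both selects and evaluates
    # the next action (max over the target network's Q-values).
    return [r + gamma * (1.0 - d) * max(q)
            for q, r, d in zip(q_next_target, rewards, dones)]

def double_dqn_target(q_next_online, q_next_target, rewards, dones, gamma=0.99):
    # Double DQN: the online network selects the action,
    # the target network evaluates it.
    targets = []
    for qo, qt, r, d in zip(q_next_online, q_next_target, rewards, dones):
        a = max(range(len(qo)), key=qo.__getitem__)  # argmax under online net
        targets.append(r + gamma * (1.0 - d) * qt[a])
    return targets
```

The target update frequency is a separate knob in both variants: it controls how often the frozen target network is synchronized with the online network, not whether double estimation is used.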
Can they come back?
Multi-Agent Stable Baselines
https://github.com/thu-ml/tianshou IMHO there isn't a library that has it all; RLlib is quite good too, but Tianshou feels closer to plain PyTorch, which makes it more intuitive to change the internals and to know what you are doing.
Question about the old policy and new policy in TRPO code
Good point... I'll check in more detail when I get a chance later today! I would suggest looking at a more recent implementation like https://github.com/DLR-RM/stable-baselines3 or https://github.com/thu-ml/tianshou if you're trying to build one. https://spinningup.openai.com/en/latest/algorithms/trpo.html is particularly good for understanding the algorithm.
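On the old-vs-new-policy point: in TRPO (and PPO) the "old" policy is a frozen snapshot of the current policy, used only to form the importance ratio in the surrogate objective. A minimal pure-Python sketch of that objective (my own illustration, not code from either library):

```python
import math

def surrogate_objective(new_logp, old_logp, advantages):
    # old_logp is a snapshot taken before the update step; in an
    # autograd framework only new_logp would carry gradients.
    ratios = [math.exp(n - o) for n, o in zip(new_logp, old_logp)]  # pi_new / pi_old
    return sum(r * a for r, a in zip(ratios, advantages)) / len(ratios)
```

At the start of an update the two policies are identical, every ratio is 1, and the objective reduces to the mean advantage; TRPO then constrains how far the new policy may move from the snapshot.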
Tensorflow vs PyTorch for A3C
Do you absolutely need A3C? A2C has become more widely used (see, e.g., the comment in https://github.com/ikostrikov/pytorch-a3c, and the fact that both https://github.com/thu-ml/tianshou and https://github.com/facebookresearch/salina have A2C implementations, but no A3C at first glance).
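For intuition on why A2C usually suffices: it replaces A3C's asynchronous gradient pushes with one synchronous batch of n-step returns collected from parallel workers. A rough pure-Python sketch of that batched return/advantage computation (my own illustration, not code from either repository):

```python
def a2c_advantages(rewards, values, last_value, dones, gamma=0.99):
    # Synchronous A2C: compute one batch of discounted n-step returns
    # after all (parallel) workers have stepped, instead of A3C's
    # per-worker asynchronous updates.
    returns = [0.0] * len(rewards)
    running = last_value  # bootstrap from the critic at the final state
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running * (1.0 - dones[t])
        returns[t] = running
    # Advantage = n-step return minus the critic's value estimate.
    return [ret - v for ret, v in zip(returns, values)]
```

The gradient math is the same as A3C's; only the synchronization scheme differs, which is why libraries tend to ship A2C and skip A3C.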
"Tianshou: a Highly Modularized Deep Reinforcement Learning Library", Weng et al 2021 (Python PyTorch MuJoCo; PPO, DQN, A2C, DDPG, SAC, TD3, REINFORCE, NPG, TRPO, ACKTR)
Code for https://arxiv.org/abs/2107.14171 found: https://github.com/thu-ml/tianshou/
Best PyTorch RL library for doing research
I tried tianshou and thought it was well designed for modularity, but it was early in development when I tried it and was missing some basic features.
pytorch-learn-reinforcement-learning
What are some alternatives?
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
Super-mario-bros-PPO-pytorch - Proximal Policy Optimization (PPO) algorithm for Super Mario Bros
cleanrl - High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
Tetris-deep-Q-learning-pytorch - Deep Q-learning for playing Tetris
ElegantRL - Massively Parallel Deep Reinforcement Learning. 🔥
6DRepNet - Official PyTorch implementation of 6DRepNet: 6D rotation representation for unconstrained head pose estimation.
seed_rl - SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference. Implements IMPALA and R2D2 algorithms in TF2 with SEED's architecture.
Amortized-SVGD-GAN - Learning to draw samples: with application to amortized maximum likelihood estimator for generative adversarial learning
pytorch-a3c - PyTorch implementation of Asynchronous Advantage Actor Critic (A3C) from "Asynchronous Methods for Deep Reinforcement Learning".
nes-torch - Minimal PyTorch Library for Natural Evolution Strategies
Deep-Reinforcement-Learning-Algorithms-with-PyTorch - PyTorch implementations of deep reinforcement learning algorithms and environments
pytorch-GAT - My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms. I've supported both Cora (transductive) and PPI (inductive) examples!