Top 10 soft-actor-critic Open-Source Projects
-
softlearning
Softlearning is a reinforcement learning framework for training maximum entropy policies in continuous domains. Includes the official implementation of the Soft Actor-Critic algorithm.
-
Popular-RL-Algorithms
PyTorch implementations of Soft Actor-Critic (SAC), Twin Delayed DDPG (TD3), Actor-Critic (AC/A2C), Proximal Policy Optimization (PPO), QT-Opt, PointNet, and more.
-
Deep-Reinforcement-Learning-Algorithms
32 projects in the framework of Deep Reinforcement Learning algorithms: Q-learning, DQN, PPO, DDPG, TD3, SAC, A2C and others. Each project is provided with a detailed training log.
-
jaxrl
JAX (Flax) implementation of algorithms for Deep Reinforcement Learning with continuous action spaces.
-
learning-to-drive-in-5-minutes
Implementation of reinforcement learning approach to make a car learn to drive smoothly in minutes
-
Meta-SAC
Auto-tune the Entropy Temperature of Soft Actor-Critic via Metagradient - 7th ICML AutoML workshop 2020
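Meta-SAC replaces SAC's standard entropy-temperature update with a metagradient objective. As background, here is a minimal sketch of that standard update, gradient descent on J(log α) = −log α · E[log π(a|s) + H_target]; the function names, learning rate, and example values are illustrative and not taken from the Meta-SAC code:

```python
# Background sketch of SAC's standard temperature update (the baseline that
# Meta-SAC's metagradient approach replaces). Names and values are illustrative.

def temperature_loss(log_alpha, log_probs, target_entropy):
    # J(log_alpha) = -log_alpha * E[log pi(a|s) + target_entropy]
    mean_lp = sum(log_probs) / len(log_probs)
    return -log_alpha * (mean_lp + target_entropy)

def update_log_alpha(log_alpha, log_probs, target_entropy, lr=1e-2):
    # One gradient-descent step; dJ/d(log_alpha) does not depend on log_alpha.
    mean_lp = sum(log_probs) / len(log_probs)
    grad = -(mean_lp + target_entropy)
    return log_alpha - lr * grad

# If the policy's entropy (-mean log_prob = 0.5) is below the target (1.0),
# log_alpha rises, so alpha = exp(log_alpha) grows and exploration increases.
new_log_alpha = update_log_alpha(log_alpha=0.0,
                                 log_probs=[-0.5, -0.4, -0.6],
                                 target_entropy=1.0)
```

The point of Meta-SAC is that this fixed target-entropy heuristic is itself replaced by a learned, metagradient-driven objective.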
Project mention: Problem with Truncated Quantile Critics (TQC) and n-step learning algorithm. | /r/reinforcementlearning | 2023-12-09
# see https://github.com/rail-berkeley/softlearning/issues/60
Have you looked at this repo or this repo?
Hi all! I'm implementing TQC with n-step learning in Trackmania (I forked the original repo from https://github.com/trackmania-rl/tmrl; my modified version is here: https://github.com/Pheoxis/AITrackmania/tree/main). It compiles, but I'm fairly sure I implemented n-step learning incorrectly, and as a beginner I don't know what I did wrong. Here's my code before implementing the n-step algorithm: https://github.com/Pheoxis/AITrackmania/blob/main/tmrl/custom/custom_algorithms.py. If anyone could check what I did wrong, I'd be very grateful. I'll also attach some plots from my last training run and outputs from printed lines (print.txt) in case they help :) If you need any additional information, feel free to ask.
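For readers hitting the same issue as the thread above: the core of n-step learning is the truncated n-step return used as the TD target. A minimal, framework-free sketch (illustrative only, not taken from the tmrl or AITrackmania code):

```python
def n_step_return(rewards, bootstrap_value, gamma, dones=None):
    """G_t = sum_{k=0}^{n-1} gamma^k * r_{t+k} + gamma^n * V(s_{t+n}),
    truncated at the first terminal transition (no bootstrap past 'done')."""
    g, discount = 0.0, 1.0
    for i, r in enumerate(rewards):
        g += discount * r
        discount *= gamma
        if dones is not None and dones[i]:
            return g  # episode ended: do not bootstrap from the next state
    return g + discount * bootstrap_value

# 3-step return with gamma = 0.9 and a critic estimate of 10 at s_{t+3}:
# 1 + 0.9 + 0.81 + 0.729 * 10 = 10.0
g = n_step_return([1.0, 1.0, 1.0], bootstrap_value=10.0, gamma=0.9)
```

A common bug in n-step implementations is bootstrapping past a terminal state or applying the wrong discount exponent to the bootstrap value, both of which silently bias the critic targets.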
soft-actor-critic related posts
-
Problem with Truncated Quantile Critics (TQC) and n-step learning algorithm.
-
Training an unbeatable AI in Trackmania [video]
-
JAX in Reinforcement Learning
-
Can you beat trackmania AI?
-
Python RL Environments on Windows
-
Infinite Horizon problem with SAC and custom environment
-
AI Learns Mario Kart Wii (Rainbow DQN)
-
Index
What are some of the best open-source soft-actor-critic projects? This list will help you:
| # | Project | Stars |
|---|---------|-------|
| 1 | softlearning | 1,166 |
| 2 | Popular-RL-Algorithms | 997 |
| 3 | autonomous-learning-library | 639 |
| 4 | Deep-Reinforcement-Learning-Algorithms | 580 |
| 5 | jaxrl | 576 |
| 6 | tmrl | 431 |
| 7 | learning-to-drive-in-5-minutes | 277 |
| 8 | Meta-SAC | 28 |
| 9 | soft-actor-critic | 1 |
| 10 | senza | 1 |