episodic-transformer-memory-ppo Alternatives
Similar projects and alternatives to episodic-transformer-memory-ppo
-
ml-agents
The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
-
Ray
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
-
Gymnasium
An API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym)
-
ppo-implementation-details
The source code for the blog post The 37 Implementation Details of Proximal Policy Optimization
-
godot_rl_agents
An open-source package that gives video game creators, AI researchers, and hobbyists the opportunity to learn complex behaviors for their non-player characters or agents
episodic-transformer-memory-ppo reviews and mentions
-
Question about Transformer model input in RL
Check out this implementation https://github.com/MarcoMeter/episodic-transformer-memory-ppo
-
Using transformers in RL?
Maybe this easy-to-follow baseline implementation of PPO + TransformerXL is an inspiration for you.
-
What RL library supports custom LSTM and Transformer neural networks to use with algorithms such as PPO?
I provide baseline implementations of TransformerXL + PPO and LSTM/GRU + PPO. These are designed to be slim and easy to follow so that you can extend them with the features and tooling you need.
-
Trained a Transformer Decoder architecture with PPO, best way to maximize the entropy?
You can also check out my baseline implementation of PPO + TrXL.
-
TransformerXL + PPO Baseline + MemoryGym
We finally completed a lightweight implementation of a memory-based agent using PPO and TransformerXL (and Gated TransformerXL).
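The core idea behind such a memory-based agent is that the policy attends over a window of past observations or hidden states rather than a single recurrent state. The sketch below illustrates that episodic-memory windowing in plain NumPy; the class name, sizes, and padding scheme are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

class EpisodicMemory:
    """Hypothetical sketch of an episodic memory buffer: a fixed-size
    sliding window of past hidden states that a Transformer-based PPO
    agent could attend over each step. Not the repository's actual API."""

    def __init__(self, max_len: int, dim: int):
        self.max_len = max_len  # memory window length the transformer sees
        self.dim = dim          # hidden state dimensionality
        self.states = []        # list of past hidden states, oldest first

    def add(self, h) -> None:
        # Append the newest hidden state; evict the oldest once full.
        self.states.append(np.asarray(h, dtype=np.float32))
        if len(self.states) > self.max_len:
            self.states.pop(0)

    def window(self):
        # Return a fixed-shape (max_len, dim) memory, zero-padded at the
        # front, plus a boolean mask marking which slots hold real data.
        mem = np.zeros((self.max_len, self.dim), dtype=np.float32)
        mask = np.zeros(self.max_len, dtype=bool)
        n = len(self.states)
        if n:
            mem[-n:] = np.stack(self.states)
            mask[-n:] = True
        return mem, mask
```

During rollout collection, the agent would call `add` once per step and feed `window()` to the transformer as the attention memory, using the mask to ignore the zero-padded slots.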
-
Stats
MarcoMeter/episodic-transformer-memory-ppo is an open-source project licensed under the MIT License, an OSI-approved license.
The primary programming language of episodic-transformer-memory-ppo is Python.
Popular Comparisons
- episodic-transformer-memory-ppo VS godot_rl_agents
- episodic-transformer-memory-ppo VS Gymnasium
- episodic-transformer-memory-ppo VS popgym
- episodic-transformer-memory-ppo VS recurrent-ppo-truncated-bptt
- episodic-transformer-memory-ppo VS brain-agent
- episodic-transformer-memory-ppo VS rl8
- episodic-transformer-memory-ppo VS DI-engine
- episodic-transformer-memory-ppo VS ml-agents
- episodic-transformer-memory-ppo VS ppo-implementation-details
- episodic-transformer-memory-ppo VS Ray