open_spiel
carla
| | open_spiel | carla |
|---|---|---|
| Mentions | 44 | 22 |
| Stars | 3,969 | 10,347 |
| Stars growth (monthly) | 1.4% | 2.1% |
| Activity | 9.4 | 8.3 |
| Last commit | 3 days ago | 4 days ago |
| Language | C++ | C++ |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
open_spiel
-
Competitive reinforcement learning for turn-based games
Hi, you can check out OpenSpiel: https://github.com/deepmind/open_spiel/
-
Shimmy 1.0: Gymnasium & PettingZoo bindings for popular external RL environments
This includes single-agent Gymnasium wrappers for DM Control, DM Lab, Behavior Suite, Arcade Learning Environment, OpenAI Gym V21 & V26. Multi-agent PettingZoo wrappers support DM Control Soccer, OpenSpiel and Melting Pot. For more information, read the release notes here:
-
Policy for each of multi-agents in RL
The RL agents in OpenSpiel (https://github.com/deepmind/open_spiel) are designed with this setting as the default (so, for example, DQN run in Tic-Tac-Toe would have two separate agents learning against each other: one knows how to play as player 1, the other as player 2).
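The one-learner-per-seat pattern described above can be sketched without OpenSpiel itself. The following is a minimal, self-contained illustration (tabular Q-learning on Tic-Tac-Toe, Monte Carlo credit from the final reward); all class and method names here are illustrative, not OpenSpiel's actual API:

```python
import random
from collections import defaultdict

class TicTacToe:
    """Minimal turn-based Tic-Tac-Toe environment."""
    def reset(self):
        self.board = [" "] * 9
        self.current_player = 0  # player 0 plays "X", player 1 plays "O"
        return tuple(self.board)

    def legal_actions(self):
        return [i for i, c in enumerate(self.board) if c == " "]

    def step(self, action):
        """Apply the current player's move; return (state, rewards, done)."""
        self.board[action] = "XO"[self.current_player]
        winner = self._winner()
        if winner is not None:
            rewards = [1.0, -1.0] if winner == 0 else [-1.0, 1.0]
            return tuple(self.board), rewards, True
        if not self.legal_actions():  # board full: draw
            return tuple(self.board), [0.0, 0.0], True
        self.current_player = 1 - self.current_player
        return tuple(self.board), [0.0, 0.0], False

    def _winner(self):
        lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
        for a, b, c in lines:
            if self.board[a] != " " and self.board[a] == self.board[b] == self.board[c]:
                return "XO".index(self.board[a])
        return None

class QLearner:
    """Tabular epsilon-greedy Q-learner tied to one fixed player seat."""
    def __init__(self, player_id, epsilon=0.2, alpha=0.5):
        self.player_id, self.epsilon, self.alpha = player_id, epsilon, alpha
        self.q = defaultdict(float)

    def act(self, state, legal):
        if random.random() < self.epsilon:
            return random.choice(legal)
        return max(legal, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward):
        key = (state, action)
        self.q[key] += self.alpha * (reward - self.q[key])

env = TicTacToe()
agents = [QLearner(0), QLearner(1)]  # two separate learners, one per seat
for _ in range(5000):
    state = env.reset()
    moves = []  # (player, state, action) triples, credited at episode end
    done = False
    while not done:
        player = env.current_player
        action = agents[player].act(state, env.legal_actions())
        moves.append((player, state, action))
        state, rewards, done = env.step(action)
    for player, s, a in moves:
        agents[player].learn(s, a, rewards[player])
```

OpenSpiel's own `open_spiel/python/examples` directory contains the real version of this loop, built on `rl_environment.Environment` and the agents in `open_spiel.python.algorithms`.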
-
My comp sci mentor who is a university student half my age says that learning to use Linux is the optimal comp sci education experience
I maintain a project called OpenSpiel, basically a library/suite of implementations of board games (mainly for AI research but can be used for whatever). It has a C/C++ core, but it also exposes the core API in Python, Rust, Go, and Julia: https://github.com/deepmind/open_spiel/ and a lot of AI algorithms in Python.
-
Looking to get started
If you are looking to program some game-theoretic algorithms, you can look at Gambit (http://www.gambit-project.org/) or OpenSpiel (https://github.com/deepmind/open_spiel). OpenSpiel also has a Julia API exposing the core and games, but it does not have any of the basic game-theoretic algorithms in Julia, so that would make a nice exercise (and maybe a contribution to the project).
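As a taste of the kind of basic game-theoretic algorithm the comment suggests implementing, here is a small, self-contained fictitious-play sketch (in Python rather than Julia, purely illustrative) that converges to the mixed equilibrium of matching pennies:

```python
# Fictitious play on a two-player zero-sum matrix game. Each player best-responds
# to the opponent's empirical action frequencies; in zero-sum games the empirical
# mixtures converge to a Nash equilibrium (here, (1/2, 1/2) for both players).

# Row player's payoffs for matching pennies (the column player gets the negation).
PAYOFF = [[1, -1],
          [-1, 1]]

def best_response_row(col_freq):
    """Row action maximizing expected payoff against the column mixture."""
    values = [sum(PAYOFF[a][b] * col_freq[b] for b in range(2)) for a in range(2)]
    return max(range(2), key=lambda a: values[a])

def best_response_col(row_freq):
    """Column action minimizing the row player's expected payoff."""
    values = [sum(PAYOFF[a][b] * row_freq[a] for a in range(2)) for b in range(2)]
    return min(range(2), key=lambda b: values[b])

def fictitious_play(iterations=10000):
    row_counts, col_counts = [1, 0], [0, 1]  # arbitrary initial beliefs
    for _ in range(iterations):
        row_freq = [c / sum(row_counts) for c in row_counts]
        col_freq = [c / sum(col_counts) for c in col_counts]
        row_counts[best_response_row(col_freq)] += 1
        col_counts[best_response_col(row_freq)] += 1
    return ([c / sum(row_counts) for c in row_counts],
            [c / sum(col_counts) for c in col_counts])

row_mix, col_mix = fictitious_play()
```

The same algorithm ported to OpenSpiel's Julia API, driven by its game interface instead of a hard-coded payoff matrix, would be exactly the sort of contribution the comment describes.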
-
[D] Adding a new RL environment to envpool
I looked at the paper before and thought some of the benchmarks were cherry-picked, and the whole "let's write envs in compiled languages" idea has been done a lot already; it only works well for games, physics, and environments that are extremely well defined. I usually end up back in Ray envs because I usually need features like parametric action spaces, predictive models built inside environments, or historical data. Code-wise, I looked at the example file they had and felt it wasn't for me: I don't like Bazel (I use Pants for my Python monorepo), and the C++ API was overly verbose for me. I don't feel I could implement reward shaping or env business logic in a very clear way. I don't do much C++ dev work, though, and a seasoned C++ dev might not care. I liked the OpenSpiel C++ env API a lot more: https://github.com/deepmind/open_spiel/blob/master/docs/developer_guide.md I have found that building a new env can sometimes be frustrating to debug, and I would probably debug a Python env first before converting it to an OpenSpiel/envpool env if I had to use a C++ env for a new problem.
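On the "debug a Python env before converting" point: a minimal reset/step skeleton like the one below (all names illustrative, not the envpool or OpenSpiel API) is often enough to pin down reward and termination logic, including a legal-action mask of the kind a parametric action space needs, before porting anything to C++:

```python
class CountdownEnv:
    """Toy env: start at `horizon`, subtract 1-3 per step, episode ends at 0.

    The observation carries a legal-action mask, which is the essence of the
    'parametric action space' feature mentioned above.
    """
    def __init__(self, horizon=10):
        self.horizon = horizon

    def reset(self):
        self.remaining = self.horizon
        return self._obs()

    def _obs(self):
        # Mask out subtractions larger than what remains (action i subtracts i+1).
        mask = [amount <= self.remaining for amount in (1, 2, 3)]
        return {"remaining": self.remaining, "action_mask": mask}

    def step(self, action):
        assert self._obs()["action_mask"][action], "illegal action"
        self.remaining -= action + 1
        done = self.remaining == 0
        # Reward shaping lives in one obvious, easily unit-tested place.
        reward = 1.0 if done else -0.1
        return self._obs(), reward, done
```

Once the Python version's behaviour is fixed by unit tests, those same tests double as a spec for the compiled port.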
-
Mastering Stratego, the classic game of imperfect information
I 404'ed when I tried to access the source code?
https://github.com/deepmind/open_spiel/tree/master/open_%20s...
Someone needs to create a web front end for this -- I would love to play it.
There's an extra space in the URL to their code (at the end of the article). The correct URL is: https://github.com/deepmind/open_spiel/tree/master/open_spie...
-
Looking for Deepmind implementation of Player of Games
This is a wild guess, but I am fairly sure that internally DeepMind uses their own tool, OpenSpiel. The code is kind of dense because it does a lot, but most of the functionality you are looking for is probably somewhere in there.
carla
- What are good Autonomous Driving simulators for research?
-
Importing map from google maps
If you are looking for a different simulator, I would suggest using [Carla](https://carla.org/) with the ROS bridge; it also has built-in support for OSM, which worked flawlessly for me (you have to install it from source to get the OSM plugin).
-
[D] Doing my (bachelor) thesis on RL. Which topic do you like best?
(3) I would suggest you use CARLA or TORCS for self-driving cars in RL as they are common test beds.
-
Currently writing out a plan for an RL-based path-planning project (for my Smart Vehicles course in my Master's degree). I don't have much domain knowledge at the moment; looking for advice on how to approach the problem.
Carla: https://github.com/carla-simulator/carla
-
8+ Reinforcement Learning Project Ideas
CARLA
- [R] CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
-
Is it possible to train a self driving car on google colab?
I've been trying for a while now and I'm starting to think it may not be possible. If anyone has managed to train a self-driving car simulator using OpenAI Gym on Google Colab (preferably), or on any remote server (AWS, GCP, ...), please let me know. So far I've tried CARLA, AirSim, SVL, and Deepdrive, and they are all equally useless unless run locally with a GUI. I'd really appreciate it if someone could suggest a way that actually makes this possible.
-
What is the best source to learn how to build a self-driving car from scratch?
If you're more on the simulation side, you can do it with CARLA: http://carla.org/ You can add almost any sensor type there, create your pipeline, and even use openpilot from Comma.ai.
-
Made a selfDrivingCar recently.
Great work! For more data acquisition (which may help with the domain gap) you can look into CARLA: https://carla.org
What are some alternatives?
AirSim - Open source simulator for autonomous vehicles built on Unreal Engine / Unity, from Microsoft AI & Research
simulator - A ROS/ROS2 Multi-robot Simulator for Autonomous Vehicles
openpilot - openpilot is an open source driver assistance system. openpilot performs the functions of Automated Lane Centering and Adaptive Cruise Control for 250+ supported car makes and models.
apollo - An open autonomous driving platform
webots - Webots Robot Simulator
muzero-general - MuZero
PettingZoo - An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities
gym - A toolkit for developing and comparing reinforcement learning algorithms.
ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
deepdrive - Deepdrive is a simulator that allows anyone with a PC to push the state-of-the-art in self-driving