AirSim
rlcard
| | AirSim | rlcard |
|---|---|---|
| Mentions | 10 | 5 |
| Stars | 15,844 | 2,696 |
| Growth | 1.1% | 3.8% |
| Activity | 0.0 | 6.2 |
| Latest commit | 14 days ago | 3 months ago |
| Language | C++ | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
AirSim
- Modding API for old game: Strategies to ensure it runs on older systems while not losing productivity?
-
Replay system (gamemode from c++)
I've created all the widgets and the code compiles successfully, but I have a problem with the replay spectator. I'm using the AirSim plugin (https://github.com/microsoft/AirSim), which simulates a drone. It has its own game mode, AirSimGameMode (C++ source here: https://github.com/microsoft/AirSim/tree/main/Unreal/Plugins/AirSim/Source), that spawns the drone at start. The problem is that I can't assign BP_PC_Spectator there. So I created my own game mode that I thought would override AirSimGameMode. It kind of did: the drone still spawns at start, and the replay spectator widget shows on the record screen, but it is not shown on the replay playback screen and I can't move there either.
- Heat map of environment monitored by drone
- 3D heatmap of environment monitored by drone
-
AirSim, ROS: can msgs be shared between packages/nodes?
Hi, I implemented a path planner as a ROS node. Now I want to try the planner in the AirSim simulator. During path execution, I want to get outputs from sensors such as GPS, IMU, and lidar. AirSim comes with a built-in ROS wrapper (https://github.com/microsoft/AirSim/tree/main/ros/src/airsim_ros_pkgs) that creates topics and services once launched. The wrapper creates two nodes: one to obtain sensor data and one to control the drone. The wrapper is built in the AirSim directory and my path planner is in another. Is it possible to share the msg definitions so I can call and subscribe to the wrapper's topics from my planner node, or do I have to write a msg for every topic and service I want to use?
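In ROS 1, msg definitions generated by one package can be imported by another by declaring a build/run dependency, so no redefinition is needed as long as both packages are in the same catkin workspace (or the wrapper package is installed). A minimal sketch of the planner package's `package.xml`, assuming the wrapper package is named `airsim_ros_pkgs` as in the linked repo:

```xml
<!-- package.xml (format 2) of the planner package -->
<package format="2">
  <name>my_path_planner</name>
  <version>0.0.1</version>
  <description>Path planner that uses the AirSim ROS wrapper topics</description>
  <maintainer email="you@example.com">you</maintainer>
  <license>MIT</license>
  <buildtool_depend>catkin</buildtool_depend>
  <!-- depend on the wrapper so its msg/srv definitions are importable -->
  <depend>airsim_ros_pkgs</depend>
  <depend>rospy</depend>
</package>
```

With the dependency declared (and `airsim_ros_pkgs` also listed under `find_package(catkin REQUIRED COMPONENTS ...)` in `CMakeLists.txt`), the planner node can simply `from airsim_ros_pkgs.msg import ...` and subscribe to the wrapper's topics directly; the exact message names here are an assumption to be checked against the wrapper's generated msg files.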
-
Destruction of a Russian fuel truck
And there are also some open-source projects you could join, instead of starting a new one. This one looks interesting, haven't tried it though: https://github.com/Microsoft/AirSim
-
What happened in IoT last two months? Here are some headlines I found interesting
In 2017 Microsoft created AirSim, an open-source simulation platform for AI research and experimentation with drones and cars. This July, Microsoft announced the upcoming release of a new simulation platform and the archive of the original 2017 AirSim.
-
Currently writing out a plan for an RL-based path-planning project. (I'm doing it for my Smart Vehicles course in my Master's degree.) I don't have much domain knowledge at the moment, but I'm looking for advice on how to approach the problem.
AirSim: https://github.com/microsoft/AirSim
-
8+ Reinforcement Learning Project Ideas
AirSim
-
Is it possible to train a self-driving car on Google Colab?
I've been trying for a while now, and I've started to think it may not be possible. If anyone has managed to train a self-driving car simulator using OpenAI Gym on Google Colab (preferably), or on any remote server (AWS, GCP, ...), please let me know. So far I've tried CARLA, AirSim, SVL, and DeepDrive, and they are all equally useless unless run locally with a GUI. I'd really appreciate it if someone could suggest a way to actually make this possible.
rlcard
- [P] Looking for RL or rules-based No-Limit Hold 'Em Work
-
Self-play environments
Hi. I've decided to do a project adapting an RL library to support self-play, so I can teach myself more about building RL systems. I've been considering working with the environment system from rlcard (https://github.com/datamllab/rlcard/), but I wonder whether there are other, more widely used self-play environment libraries. Thanks.
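The core of adapting an environment for self-play is having one policy drive every seat while recording each seat's experience from its own perspective (with rewards negated for the opponent in a zero-sum game). A minimal sketch on a toy matching game, independent of any particular library (all names here are illustrative, not rlcard's API):

```python
import random

def play_episode(policy, rng):
    """One self-play episode of a toy zero-sum matching game.

    The same policy drives both seats; each seat's transition is
    recorded from its own perspective as (player, action, reward).
    """
    a0 = policy(rng)
    a1 = policy(rng)
    r0 = 1 if a0 == a1 else -1  # player 0 wins on a match
    return [(0, a0, r0), (1, a1, -r0)]  # zero-sum: rewards negate

def uniform_policy(rng):
    return rng.choice([0, 1])

if __name__ == "__main__":
    rng = random.Random(0)
    batch = [t for _ in range(100) for t in play_episode(uniform_policy, rng)]
    # every episode's rewards cancel, so the batch total is 0
    print(sum(r for _, _, r in batch))  # 0
```

In a real library adaptation, the training loop would feed `batch` into one shared learner, which is exactly what makes it self-play rather than two independent agents.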
-
[Project] Making a Poker AI - having trouble with the form of ML to make smart / strong decisions
Can you point me to some active forums for poker bot building? I can only find GitHub repos like https://github.com/datamllab/rlcard, which is mostly reinforcement learning, whereas state-of-the-art approaches like Pluribus are more about game theory.
-
8+ Reinforcement Learning Project Ideas
Build a Poker bot with RLCard
-
What sort of algorithm should I use? Incomplete-information card game. (Flowchart for reference)
Probably the easiest way to get started is to implement your game on an open-source RL framework that has working implementations of some basic CFR variations, as well as other self-play algorithms such as NFSP. OpenSpiel and RLCard are two that I am aware of. Depending on the complexity of your game and how strong your agent needs to be, you might be satisfied with the performance you get using one of these frameworks.
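The building block underneath those CFR variations is regret matching: play in proportion to positive accumulated regret, and average the strategies over time. A self-contained toy sketch on rock-paper-scissors (not the OpenSpiel or RLCard API, just the idea), where the average strategy of self-play converges toward the uniform equilibrium:

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def regret_matching_strategy(regrets):
    """Play each action in proportion to its positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / ACTIONS] * ACTIONS
    return [p / total for p in positive]

def train(iterations, rng):
    """Self-play regret matching; returns the average strategy."""
    regrets = [0.0] * ACTIONS
    opp_regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = regret_matching_strategy(regrets)
        opp_strat = regret_matching_strategy(opp_regrets)
        a = rng.choices(range(ACTIONS), weights=strat)[0]
        b = rng.choices(range(ACTIONS), weights=opp_strat)[0]
        # accumulate counterfactual regret for each alternative action
        for alt in range(ACTIONS):
            regrets[alt] += payoff(alt, b) - payoff(a, b)
            opp_regrets[alt] += payoff(alt, a) - payoff(b, a)
        for i in range(ACTIONS):
            strategy_sum[i] += strat[i]
    return [s / iterations for s in strategy_sum]

if __name__ == "__main__":
    avg = train(20000, random.Random(0))
    # the average strategy is approximately uniform (the RPS equilibrium)
    print([round(p, 2) for p in avg])
```

CFR extends exactly this update to sequential games by keeping one regret table per information set, which is what OpenSpiel's and RLCard's CFR implementations do at scale.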
What are some alternatives?
carla - Open-source simulator for autonomous driving research.
gym - A toolkit for developing and comparing reinforcement learning algorithms.
ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
open_spiel - OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.
mjai-reviewer - 🔍🀄️ Review mahjong game log with mjai-compatible mahjong AI.
GAAS - GAAS is an open-source program designed for fully autonomous VTOLs (a.k.a. flying cars) and drones. GAAS stands for Generalized Autonomy Aviation System.
Poker - Fully functional poker bot that works on PartyPoker, PokerStars, and GGPoker, scraping tables with OpenCV (adaptable via GUI) or a neural network and making decisions based on a genetic algorithm and Monte Carlo simulation for poker equity calculation. Binaries can be downloaded with this link:
Autonomous-Ai-drone-scripts - State-of-the-art autonomous navigation scripts using AI, computer vision, lidar, and GPS to control an ArduCopter-based quadcopter.
MonsterHunterPortable3rdHDRemake - Personal fork of a texture upscaling project for PSP's Monster Hunter Portable 3rd
apollo - An open autonomous driving platform
shengji - An online version of shengji (a.k.a. tractor) and zhaopengyou (a.k.a. Finding Friends)