Ray
nimbo (DISCONTINUED)
| | Ray | nimbo |
|---|---|---|
| Mentions | 42 | 5 |
| Stars | 30,474 | 123 |
| Growth | 2.9% | - |
| Activity | 10.0 | 8.8 |
| Latest commit | 7 days ago | over 2 years ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Ray
-
Open Source Advent Fun Wraps Up!
22. Ray | Github | tutorial
-
TransformerXL + PPO Baseline + MemoryGym
RLlib
-
Elixir Livebook now as a desktop app
I've wondered whether it's easier to add the data-analysis tooling to Elixir that Python seems to have, or to add the features to Python that Erlang (and by extension Elixir) provides out of the box.
From what I can see, if you want easier multiprocessing in Python (say, running things async), you have to use something like Ray Core[0]; then, if you want multiple machines, you need Redis(?). Elixir/Erlang supports this out of the box.
Explorer[1] is an interesting approach: it uses Rust via Rustler (an Elixir library for calling Rust code) and uses Polars as its dataframe library. I think Rustler needs to be reworked for this use case, as it can be slow to return data. I made initial improvements that drastically improve encoding (https://github.com/elixir-nx/explorer/pull/282 and https://github.com/elixir-nx/explorer/pull/286; tl;dr: 20+ seconds down to 3).
-
preprocessing millions of records - how to speed up the processing
Dask, Ray (ray.io), or PySpark (if you have a cluster).
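Before reaching for a distributed framework, a single machine can often be saturated with the standard library. A hedged sketch using `concurrent.futures` with chunking (the record format and `clean_record` logic are illustrative stand-ins for real preprocessing):

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import islice

def clean_record(rec: str) -> str:
    # Illustrative per-record work; replace with real preprocessing.
    return rec.strip().lower()

def process_chunk(chunk):
    # Each worker handles a whole chunk, amortizing the per-task
    # inter-process communication overhead across many records.
    return [clean_record(r) for r in chunk]

def chunked(iterable, size):
    # Yield successive lists of up to `size` items.
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

def preprocess(records, chunk_size=10_000, workers=4):
    out = []
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves input order across chunks.
        for result in pool.map(process_chunk, chunked(records, chunk_size)):
            out.extend(result)
    return out

if __name__ == "__main__":
    # Guard is required for process-based pools on spawn platforms.
    print(preprocess(["  A ", " b", "C  "], chunk_size=2, workers=2))
```

Dask and Ray follow the same chunk-and-map shape but add spill-to-disk, scheduling, and multi-machine scaling; PySpark is worth it mainly when a cluster already exists.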
-
3% of 666 Python codebases we checked had a silently failing unit test
https://github.com/ansible-community/ara/pull/358 https://github.com/b12io/orchestra/pull/830 https://github.com/batiste/django-page-cms/pull/210 https://github.com/carpentries/amy/pull/2130 https://github.com/celery/django-celery/pull/612 https://github.com/django-cms/django-cms/pull/7241 https://github.com/django-oscar/django-oscar/pull/3867 https://github.com/esrg-knights/Squire/pull/253 https://github.com/Frojd/django-react-templatetags/pull/64 https://github.com/groveco/django-sql-explorer/pull/474 https://github.com/jazzband/django-silk/pull/550 https://github.com/keras-team/keras/pull/16073 https://github.com/ministryofjustice/cla_backend/pull/773 https://github.com/nitely/Spirit/pull/306 https://github.com/python/pythondotorg/pull/1987 https://github.com/rapidpro/rapidpro/pull/1610 https://github.com/ray-project/ray/pull/22396 https://github.com/saltstack/salt/pull/61647 https://github.com/Swiss-Polar-Institute/project-application/pull/483 https://github.com/UEWBot/dipvis/pull/216
-
Rust OpenCV - Simple Guide
I'd really like to use Rust+OpenCV instead of Python+OpenCV to process a lot of images (xxxxxx pieces on a central NAS). I'd also want to split the work over multiple worker nodes for speed. Unfortunately, I haven't yet had the time to figure this out... Meanwhile, a Rust API for Ray is being worked on! https://github.com/ray-project/ray/issues/20609
-
Blazer - HPC python library for MPI workflows
ray.io doesn't support MPI natively, and is thus not "supercomputer" friendly. Blazer runs on MPI, which runs across the NUMA (non-uniform memory access) setup of a supercomputer. The compute interconnect is hundreds of times faster than network remoting, which ray.io uses.
-
JORLDY: OpenSource Reinforcement Learning Framework
Distributed RL algorithms are provided using Ray.
-
Python stands to lose its GIL, and gain a lot of speed
I had a similar use case and ended up using Ray: https://github.com/ray-project/ray
-
How to deploy a rllib-trained model?
Currently, RLlib's "--export-formats" does nothing; I have folders of checkpoints, but no models. It looks like the internal export_model function isn't implemented yet: https://github.com/ray-project/ray/issues/19021
nimbo
-
Show HN: SpotML – Managed ML Training on Cheap AWS/GCP Spot Instances
Seems like Nimbo (https://nimbo.sh) has a Business Source License (https://github.com/nimbo-sh/nimbo/blob/master/LICENSE), so you might want to check with them regarding licensing terms for a startup that is using their code and/or docs in "production"?
Otherwise, this idea is interesting and probably generalizable to other applications. Maybe it's just not clear to me, but what are the advantages of your service over existing solutions such as Nimbo and Spotty? FWIW, it might be worthwhile to spell this out on your website.
Good luck!
You should really mention / give attribution / emphasize that this is a fork of https://spotty.cloud, and that you took a lot from https://github.com/nimbo-sh/nimbo as well.
-
[P] Nimbo: Run jobs on AWS with a single command
My friend and I just launched Nimbo, a dead-simple CLI that wraps the AWS CLI, letting you run code on AWS as if you were running it locally. GitHub: https://github.com/nimbo-sh/nimbo. Docs: https://docs.nimbo.sh.
What are some alternatives?
optuna - A hyperparameter optimization framework
stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
Faust - Python Stream Processing
gevent - Coroutine-based concurrency library for Python
stable-baselines - A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
SCOOP - Scalable COncurrent Operations in Python
Thespian Actor Library - Python Actor concurrency library
Dask - Parallel computing with task scheduling
django-celery - Old Celery integration project for Django
pymarl - Python Multi-Agent Reinforcement Learning framework
ElegantRL - Massively Parallel Deep Reinforcement Learning. 🔥
eventlet - Concurrent networking library for Python