batched-fn
🦀 Rust server plugin for deploying deep learning models with batched prediction (by epwalsh)
PERSIA
High performance distributed framework for training deep learning recommendation models based on PyTorch. (by PersiaML)
| | batched-fn | PERSIA |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 17 | 383 |
| Growth | - | 1.3% |
| Activity | 3.7 | 0.0 |
| Latest commit | about 2 months ago | 3 months ago |
| Language | Rust | Rust |
| License | Apache License 2.0 | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
batched-fn
Posts with mentions or reviews of batched-fn.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2021-03-08.
-
Processing a batch of requests for deep learning inference on a rust server
Some research: I found a crate called batched_fn that seems to do exactly what I want, with one catch: I cannot run async tasks (download, preprocess, etc.) within the batch handler, i.e., it's specifically for inference. I've opened an issue about it. What I plan to do instead:
- The response handlers pass their IDs to a batching mechanism and hold a receiver for the output channel (details below).
- The batching mechanism batches up image IDs under high load.
- It passes the batch to another thread that downloads the images, preprocesses them, and runs inference on them.
- That thread sends the results to the output channel that every response handler holds a receiver for. Each response handler checks whether the message it's receiving is addressed to it and, if so, returns a JSON API response.
PERSIA
Posts with mentions or reviews of PERSIA.
We have used some of these posts to build our list of alternatives
and similar projects.
-
Researchers Introduce ‘PERSIA’: A PyTorch-Based System for Training Large Scale Deep Learning Recommendation Models up to 100 Trillion Parameters
Github: https://github.com/persiaml/persia
-
[R] Kwai, Kuaishou & ETH Zürich Propose PERSIA, a Distributed Training System That Supports Deep Learning-Based Recommenders of up to 100 Trillion Parameters
Code for https://arxiv.org/abs/2111.05897 found: https://github.com/PersiaML/Persia
The code is available on the project’s GitHub. The paper PERSIA: An Open, Hybrid System Scaling Deep Learning-based Recommenders up to 100 Trillion Parameters is on arXiv.
What are some alternatives?
When comparing batched-fn and PERSIA you can also consider the following projects:
tch-rs - Rust bindings for the C++ API of PyTorch.
tangram - Tangram is an all-in-one automated machine learning framework. [Moved to: https://github.com/tangramdotdev/tangram]
rust - Empowering everyone to build reliable and efficient software.
spotlight - Deep recommender models using PyTorch.
zebra - Zcash - Financial Privacy in Rust 🦓
bastionlab - A simple framework for privacy-friendly data science collaboration
ml-surveys - 📋 Survey papers summarizing advances in deep learning, NLP, CV, graphs, reinforcement learning, recommendations, etc.
bagua - Bagua Speeds up PyTorch
serenade - Session-based recommender system: Serenade