PERSIA
High-performance distributed framework for training deep learning recommendation models, based on PyTorch. (by PersiaML)
batched-fn
🦀 Rust server plugin for deploying deep learning models with batched prediction (by epwalsh)
| | PERSIA | batched-fn |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 381 | 17 |
| Growth | 2.1% | - |
| Activity | 0.0 | 3.7 |
| Last commit | 2 months ago | about 1 month ago |
| Language | Rust | Rust |
| License | MIT License | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
PERSIA
Posts with mentions or reviews of PERSIA.
We have used some of these posts to build our list of alternatives
and similar projects.
We haven't tracked posts mentioning PERSIA yet.
Tracking mentions began in Dec 2020.
batched-fn
Posts with mentions or reviews of batched-fn.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2021-03-08.
- Processing a batch of requests for deep learning inference on a Rust server
Some research:
- Found a crate called batched_fn that seems to do exactly what I want, with the catch that I cannot run async tasks (download, preprocess, etc.) within the batch handler, i.e., it's specifically for inference. I've opened an issue about it.
- What I plan to do instead:
  - The response handlers pass their ids to a batching mechanism and hold a receiver for the output channel (details below).
  - The batching mechanism batches up image ids under high load.
  - It passes the batch to another thread that downloads, preprocesses, and infers from it.
  - That thread passes the results to the result channel that every response handler has a receiver for. Each response handler checks whether the message it is receiving is for itself and, if so, returns a JSON API response.
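The batching design sketched in that post can be prototyped with std channels alone. Everything here is hypothetical: `predict_batch` stands in for the download/preprocess/infer step, the batch size and flush timeout are arbitrary, and each handler gets its own result channel instead of the shared one the post describes (a common simplification that avoids filtering messages by id):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Stand-in for the batched step (download, preprocess, run inference).
fn predict_batch(ids: &[u32]) -> Vec<String> {
    ids.iter()
        .map(|id| format!("prediction for image {}", id))
        .collect()
}

/// A request pairs an image id with the channel its result is sent back on.
struct Request {
    id: u32,
    respond: mpsc::Sender<String>,
}

fn main() {
    let (req_tx, req_rx) = mpsc::channel::<Request>();

    // Batching thread: collect requests until a size limit is hit or the
    // queue goes quiet, then run the whole batch at once.
    let batcher = thread::spawn(move || {
        const MAX_BATCH: usize = 4;
        let mut batch: Vec<Request> = Vec::new();
        loop {
            match req_rx.recv_timeout(Duration::from_millis(20)) {
                Ok(req) => {
                    batch.push(req);
                    if batch.len() < MAX_BATCH {
                        continue; // keep filling the batch
                    }
                }
                Err(mpsc::RecvTimeoutError::Timeout) if batch.is_empty() => continue,
                Err(mpsc::RecvTimeoutError::Timeout) => {} // flush a partial batch
                Err(mpsc::RecvTimeoutError::Disconnected) => break,
            }
            let ids: Vec<u32> = batch.iter().map(|r| r.id).collect();
            let results = predict_batch(&ids);
            for (req, result) in batch.drain(..).zip(results) {
                let _ = req.respond.send(result);
            }
        }
    });

    // Simulated response handlers submitting requests concurrently.
    let handlers: Vec<_> = (0..3u32)
        .map(|id| {
            let req_tx = req_tx.clone();
            thread::spawn(move || {
                let (tx, rx) = mpsc::channel();
                req_tx.send(Request { id, respond: tx }).unwrap();
                rx.recv().unwrap() // block until the batch containing us runs
            })
        })
        .collect();
    drop(req_tx); // let the batcher shut down once all handlers are done

    for handler in handlers {
        println!("{}", handler.join().unwrap());
    }
    batcher.join().unwrap();
}
```

Handing each request its own response sender sidesteps the "check if the message is for me" step, since the batching thread routes every result directly back to the handler that asked for it.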
What are some alternatives?
When comparing PERSIA and batched-fn you can also consider the following projects:
tangram - Tangram is an all-in-one automated machine learning framework. [Moved to: https://github.com/tangramdotdev/tangram]
tch-rs - Rust bindings for the C++ api of PyTorch.
spotlight - Deep recommender models using PyTorch.
ml-surveys - 📋 Survey papers summarizing advances in deep learning, NLP, CV, graphs, reinforcement learning, recommendations, etc.
bastionlab - A simple framework for privacy-friendly data science collaboration
bagua - Bagua Speeds up PyTorch
serenade - Session-based recommender system: Serenade
zebra - Zcash - Financial Privacy in Rust 🦓
rust - Empowering everyone to build reliable and efficient software.