sourmash vs mosec

| | sourmash | mosec |
|---|---|---|
| Mentions | 1 | 11 |
| Stars | 437 | 712 |
| Growth | 2.5% | 2.1% |
| Activity | 9.4 | 8.5 |
| Latest commit | 2 days ago | 8 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sourmash
- Any good meta-transcriptomics pipelines
Have you seen https://www.nature.com/articles/s41587-019-0209-9 and https://github.com/sourmash-bio/sourmash ?
mosec
- 20x Faster as the Beginning: Introducing pgvecto.rs extension written in Rust
Mosec - A high-performance serving framework for ML models, offers dynamic batching and CPU/GPU pipelines to fully exploit your compute machine. Simple and faster alternative to NVIDIA Triton.
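The "dynamic batching" mentioned in that description can be illustrated with a small pure-Python sketch (a conceptual toy, not mosec's actual implementation): incoming requests sit in a queue, and a worker drains up to `max_batch_size` of them at a time, taking whatever has arrived once the wait for the first item succeeds.

```python
import queue

def dynamic_batcher(q, max_batch_size, timeout):
    """Collect up to max_batch_size items, blocking at most `timeout` for the first."""
    batch = []
    try:
        batch.append(q.get(timeout=timeout))  # wait for the first request
        while len(batch) < max_batch_size:
            batch.append(q.get_nowait())      # drain whatever else is queued
    except queue.Empty:
        pass
    return batch

# Simulate ten queued inference requests handled in batches of up to four.
requests = queue.Queue()
for i in range(10):
    requests.put(i)

batches = []
while not requests.empty():
    batches.append(dynamic_batcher(requests, max_batch_size=4, timeout=0.1))
```

In a real serving framework the collected batch would then go through the model in a single forward pass, amortizing per-request overhead on the GPU.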
- [D] Handling Concurrent Request for ML Model API
- Yes C++ would be better, but you can try mosec. It has a Python interface and helps you handle all the difficult things about Python multiprocessing. The web service part is implemented in Rust thus it's fast enough for machine learning services.
- Launching ModelZ Beta!
Contribute to open source projects: Modelz is built on top of envd, mosec, modelz-llm and many other open source projects. If you're interested in contributing to these projects, you can check out their GitHub repositories and start contributing.
- Deploying a model with an API in docker
You could first create the image with the framework you like (e.g. bentoml, or https://github.com/mosecorg/mosec for a lightweight option).
- PostgresML is 8-40x faster than Python HTTP microservices
- Python Machine Learning Service Can Run Way More Faster
- [D] Open Source ML Organisations to contribute to?
If you're interested in machine learning model serving, you can check out mosec: https://github.com/mosecorg/mosec
- Why not multiprocessing
During the development of Mosec, a machine learning serving project, I used a lot of multiprocessing to make it more efficient. I want to share some experiences and research related to Python multiprocessing.
- [P] Mosec: deploy your machine learning model in an easy and efficient way
That's a good example. I have encountered the same situation before. I have opened a discussion on GitHub to track the DAG progress.
- Mosec: deploy your machine learning model in an easy and efficient way
What are some alternatives?
intertext - Detect and visualize text reuse
BentoML - The most flexible way to serve AI/ML models in production - Build Model Inference Service, LLM APIs, Inference Graph/Pipelines, Compound AI systems, Multi-Modal, RAG as a Service, and more!
biobakery_workflows - bioBakery workflows is a collection of workflows and tasks for executing common microbial community analyses using standardized, validated tools and parameters.
GPflow - Gaussian processes in TensorFlow
GEMAP_NCLDV - Genome Mapping Analysis Pipeline for giant viruses
mlrun - MLRun is an open source MLOps platform for quickly building and managing continuous ML applications across their lifecycle. MLRun integrates into your development and CI/CD environment and automates the delivery of production data, ML pipelines, and online applications.
rustplus - Rust+ API Wrapper Written in Python for the Game: Rust
text-generation-inference - Large Language Model Text Generation Inference
sequence_align - Efficient implementations of Needleman-Wunsch and other sequence alignment algorithms written in Rust with Python bindings via PyO3.
metaflow - :rocket: Build and manage real-life ML, AI, and data science projects with ease!
postgresml - The GPU-powered AI application database. Get your app to market faster using the simplicity of SQL and the latest NLP, ML + LLM models.
inference-benchmark - Benchmark for machine learning model online serving (LLM, embedding, Stable-Diffusion, Whisper)