| | postgresml | mosec |
|---|---|---|
| Mentions | 23 | 11 |
| Stars | 5,442 | 707 |
| Growth | 1.8% | 1.4% |
| Activity | 9.7 | 8.5 |
| Last commit | 5 days ago | 2 days ago |
| Language | Rust | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
postgresml
- PostgresML
-
[P] pgml-chat: A command-line tool for deploying low-latency knowledge-based chatbots
The Python client SDK is so small because it's just a wrapper around the Rust client SDK: https://github.com/postgresml/postgresml/tree/master/pgml-sdks/rust/pgml. We currently support JS/TypeScript SDKs as well, all generated from the same safe and efficient underlying Rust implementation using some fancy Rust macros.
-
Pg_later: Asynchronous Queries for Postgres
I don't think you'd replace a materialized view with pg_later, but it might help you populate or update your materialized view if you are trying to do that asynchronously. pglater.exec() works with DDL too!
I use it a lot for long-running queries in data science and machine learning work, often when executing queries from a Jupyter notebook or the CLI. That way, if my Jupyter kernel dies, the query keeps executing even when the network or my environment has an issue. I've also started using it a bit more with https://github.com/postgresml/postgresml for model training tasks, since those can be quite long-running depending on the situation.
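Roughly, that notebook workflow looks like the sketch below (using psycopg; `pglater.fetch_results` and the returned job id reflect my recollection of the pg_later API and may differ from the current version, and the table and query are made up for illustration):

```python
import psycopg

# Submit a long-running statement through pg_later; the connection can be
# closed immediately and the query keeps running inside Postgres.
with psycopg.connect("postgresql://localhost/mydb") as conn:
    job_id = conn.execute(
        "SELECT pglater.exec('CREATE TABLE features AS SELECT * FROM big_training_set')"
    ).fetchone()[0]

# Later, even from a fresh kernel, poll for the result by job id.
with psycopg.connect("postgresql://localhost/mydb") as conn:
    result = conn.execute(
        "SELECT pglater.fetch_results(%s)", (job_id,)  # assumed function name
    ).fetchone()[0]
    print(result)
```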
-
Replace pinecone.
PostgresML comes with pgvector as a vector database. The cool thing is that, because it's a database extension, it can run your models in the same memory space as your data. We're also working on ggml support for huggingface transformers, but could use some help testing more LLMs for compatibility. https://github.com/postgresml/postgresml/pull/748
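As a minimal sketch of what that looks like, here's embedding generation plus pgvector recall in a single SQL statement, driven from Python (the table, column names, and model name are illustrative; `pgml.embed` is the extension's embedding function):

```python
import psycopg

QUERY = """
SELECT id, body,
       -- the query embedding is generated inside the database and compared
       -- against stored pgvector embeddings in the same statement
       embedding <=> pgml.embed('intfloat/e5-small-v2', %s)::vector AS distance
FROM documents            -- hypothetical table with a pgvector 'embedding' column
ORDER BY distance
LIMIT 5;
"""

with psycopg.connect("postgresql://localhost/pgml_dev") as conn:
    for row in conn.execute(QUERY, ("how do I tune vector recall?",)):
        print(row)
```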
-
Python SDK for PostgresML with scalable LLM embedding memory and text generation
We've been working on a Python SDK[1] for PostgresML to make it easier for application developers to get the performance and scalability benefits of integrated memory for LLMs, by combining embedding generation, vector recall and LLM tasks from HuggingFace in a single database query.
This work builds on our previous efforts, which achieved a 10x performance improvement by generating LLM embeddings[2] from input text and tuning vector recall[3] in a single process to avoid excessive network transit.
We'd love your feedback on our roadmap[4] for this extension if you have other use cases for an ML application database. So far, we've implemented our best practices for scalable vector storage as a reference implementation for interacting with a Postgres-based ML application database.
[1]: https://github.com/postgresml/postgresml/tree/master/pgml-sd...
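A rough sketch of the SDK flow, as I recall it from around the time of this post (method names such as `create_or_get_collection`, `generate_chunks`, `generate_embeddings`, and `vector_search` are from memory and may have changed in newer releases; the connection string and documents are placeholders):

```python
import asyncio
from pgml import Database  # pip install pgml

CONNINFO = "postgres://user:pass@localhost:5432/pgml_development"  # placeholder

async def main():
    db = Database(CONNINFO)
    collection = await db.create_or_get_collection("hn_demo")

    # Store raw documents; chunking and embedding generation both run
    # inside the database rather than in the application process.
    await collection.upsert_documents([
        {"id": "1", "text": "PostgresML brings machine learning to Postgres."},
    ])
    await collection.generate_chunks()
    await collection.generate_embeddings()

    # Vector recall happens in the same database query that embeds the question.
    results = await collection.vector_search("What is PostgresML?", top_k=3)
    print(results)

asyncio.run(main())
```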
-
[P] Python SDK for PostgresML w/ scalable LLM embedding memory and text generation
We've been working on a Python SDK for PostgresML to make it easier for application developers to get the performance and scalability benefits of integrated memory for LLMs, by combining embedding generation, vector recall and LLM tasks from HuggingFace in a single database query.
-
Show HN: We unified LLMs, vector memory, ranking, pruning models in one process
Links:
[1]: https://huggingface.co/spaces/mteb/leaderboard
[2]: https://postgresml.org/blog/generating-llm-embeddings-with-o...
[3]: https://postgresml.org/blog/tuning-vector-recall-while-gener...
[4]: https://postgresml.org/blog/personalize-embedding-vector-sea...
Github: https://github.com/postgresml/postgresml
- Personalize embedding results with application data in your database
-
[P] We've unified LLMs w/ vector memory + reranking & pruning models in a single process for better performance
Github: https://github.com/postgresml/postgresml
-
How to store hugging face model in postgreSQL
I'd encourage you to do inference outside of PostgreSQL (use TF Serving and make requests against it, or do batch inference), but if you're determined to do so, PostgresML provides an extension that integrates with the transformers library and allows you to call models directly from SQL.
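If you do go that route, calling a Hugging Face pipeline from SQL looks roughly like this (a sketch using PostgresML's `pgml.transform` function; the task, inputs, and database name are illustrative):

```python
import psycopg

TRANSFORM_SQL = """
SELECT pgml.transform(
    task   => '{"task": "text-classification"}'::jsonb,
    inputs => ARRAY['This movie was great!', 'This movie was terrible.']
);
"""

with psycopg.connect("postgresql://localhost/pgml_dev") as conn:
    # The transformers pipeline runs inside the database process.
    print(conn.execute(TRANSFORM_SQL).fetchone()[0])
```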
mosec
-
20x Faster as the Beginning: Introducing pgvecto.rs extension written in Rust
Mosec - a high-performance serving framework for ML models that offers dynamic batching and CPU/GPU pipelines to fully exploit your compute machine. A simpler and faster alternative to NVIDIA Triton.
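A minimal sketch of what that looks like in mosec: workers are plain Python classes, pipeline stages are chained by appending workers, and dynamic batching is enabled per stage with `max_batch_size` (the model call itself is stubbed out here):

```python
from mosec import Server, Worker

class Preprocess(Worker):
    # Stage 1: CPU-bound preprocessing, one request at a time.
    def forward(self, data: dict) -> str:
        return data["text"].strip().lower()

class Inference(Worker):
    # Stage 2: dynamic batching — with max_batch_size set below,
    # forward receives a list of inputs and must return a list of outputs.
    def forward(self, batch: list) -> list:
        return [{"label": "positive", "length": len(text)} for text in batch]  # stub model

if __name__ == "__main__":
    server = Server()
    server.append_worker(Preprocess, num=2)                    # CPU stage
    server.append_worker(Inference, num=1, max_batch_size=16)  # batched (e.g. GPU) stage
    server.run()
```

Once running, requests are sent as JSON over HTTP (by default mosec listens on port 8000), and the Rust-based web layer handles queuing and batching across the worker processes.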
-
[D] Handling Concurrent Request for ML Model API
- Yes, C++ would be better, but you can try mosec. It has a Python interface and handles all the difficult parts of Python multiprocessing for you. The web service layer is implemented in Rust, so it's fast enough for machine learning services.
-
Launching ModelZ Beta!
Contribute to open source projects: ModelZ is built on top of envd, mosec, modelz-llm, and many other open source projects. If you're interested in contributing, you can check out their GitHub repositories and get started.
-
Deploying a model with an API in docker
You could first create the image with the framework you like (e.g. BentoML, or https://github.com/mosecorg/mosec for something lightweight).
- PostgresML is 8-40x faster than Python HTTP microservices
- Python Machine Learning Services Can Run Much Faster
-
[D] Open Source ML Organisations to contribute to?
If you're interested in machine learning model serving, you can check out mosec: https://github.com/mosecorg/mosec
-
Why not multiprocessing
During the development of Mosec, a machine learning serving project, I used multiprocessing a lot to make it more efficient. I want to share some experiences and research related to Python multiprocessing.
-
[P] Mosec: deploy your machine learning model in an easy and efficient way
That's a good example. I've run into the same situation before. I've created a discussion on GitHub to track the DAG progress.
- Mosec: deploy your machine learning model in an easy and efficient way
What are some alternatives?
MindsDB - The platform for customizing AI from enterprise data
BentoML - The most flexible way to serve AI/ML models in production - Build Model Inference Service, LLM APIs, Inference Graph/Pipelines, Compound AI systems, Multi-Modal, RAG as a Service, and more!
Postico - Public issue tracking for Postico
GPflow - Gaussian processes in TensorFlow
Activeloop Hub - Data Lake for Deep Learning. Build, manage, query, version, & visualize datasets. Stream data real-time to PyTorch/TensorFlow. https://activeloop.ai [Moved to: https://github.com/activeloopai/deeplake]
mlrun - MLRun is an open source MLOps platform for quickly building and managing continuous ML applications across their lifecycle. MLRun integrates into your development and CI/CD environment and automates the delivery of production data, ML pipelines, and online applications.
deepchecks - Deepchecks: Tests for Continuous Validation of ML Models & Data. Deepchecks is a holistic open-source solution for all of your AI & ML validation needs, enabling you to thoroughly test your data and models from research to production.
text-generation-inference - Large Language Model Text Generation Inference
metaflow - :rocket: Build and manage real-life ML, AI, and data science projects with ease!
inference-benchmark - Benchmark for machine learning model online serving (LLM, embedding, Stable-Diffusion, Whisper)