| | server | jina |
|---|---|---|
| Mentions | 24 | 126 |
| Stars | 7,384 | 20,085 |
| Growth | 2.7% | 1.3% |
| Activity | 9.5 | 9.1 |
| Latest commit | about 11 hours ago | 18 days ago |
| Language | Python | Python |
| License | BSD 3-clause "New" or "Revised" License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
server
- FLaNK Weekly 08 Jan 2024
- Is there any open source app to load a model and expose API like OpenAI?
- "A matching Triton is not available"
- Best way to serve Llama V2 (llama.cpp vs. Triton vs. HF text generation inference)
I am wondering what is the best / most cost-efficient way to serve Llama V2: llama.cpp (is it production-ready or just for playing around?), Triton Inference Server, or HF text generation inference?
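As a rough illustration of what serving looks like with one of these options, here is a minimal sketch of querying a Hugging Face text-generation-inference endpoint over HTTP; the URL, port, image tag, and model ID are illustrative assumptions, not taken from the thread:

```python
import requests

# Assumes a text-generation-inference container is already running, e.g.:
#   docker run --gpus all -p 8080:80 ghcr.io/huggingface/text-generation-inference \
#       --model-id meta-llama/Llama-2-7b-chat-hf
# (image tag and model ID are illustrative)
TGI_URL = "http://localhost:8080/generate"

resp = requests.post(
    TGI_URL,
    json={
        "inputs": "What is the most cost-efficient way to serve Llama V2?",
        "parameters": {"max_new_tokens": 64, "temperature": 0.7},
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```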
- Triton Inference Server - Backend
- Single RTX 3080 or two RTX 3060s for deep learning inference?
For inference of CNNs, memory should really not be an issue. If it is, that's a software engineering problem, not a hardware issue. FP16 or INT8 for weights is fine, and weight size won't increase due to the high resolution. During inference, memory used for hidden-layer tensors can be reused as soon as the last consumer layer has been processed. You are likely using something that is designed for training, which blows up the memory requirement; or, if you are using TensorRT or something like that, you need to be careful to avoid every task loading its own copy of the library code into the GPU. Maybe look at https://github.com/triton-inference-server/server
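To put numbers on the weight-precision point, a back-of-envelope sketch (the parameter count is hypothetical, roughly ResNet-50-sized):

```python
# Back-of-envelope weight memory for a CNN at different precisions.
params = 25_600_000  # hypothetical, ~ResNet-50-sized

bytes_per_param = {"FP32": 4, "FP16": 2, "INT8": 1}

for precision, nbytes in bytes_per_param.items():
    mib = params * nbytes / 2**20
    print(f"{precision}: {mib:.0f} MiB of weights")

# FP32 ~98 MiB, FP16 ~49 MiB, INT8 ~24 MiB -- tiny next to a 10-12 GB card,
# which is why weights alone rarely decide between a 3080 and two 3060s.
```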
- Machine Learning Inference Server in Rust?
I am looking for something like [Triton Inference Server](https://github.com/triton-inference-server/server) or [TFX Serving](https://www.tensorflow.org/tfx/guide/serving), but in Rust. I came across [Orkhon](https://github.com/vertexclique/orkhon), which seems to be dormant, and a bunch of examples off of the [Awesome-Rust-MachineLearning](https://github.com/vaaaaanquish/Awesome-Rust-MachineLearning) list.
- Multi-model serving options
You've already mentioned Seldon Core, which is well worth looking at, but if you're just after the raw multi-model serving aspect rather than a fully-fledged deployment framework, you should maybe take a look at the individual inference servers: Triton Inference Server and MLServer both support multi-model serving for a wide variety of frameworks (and custom Python models). MLServer might be a better option as it has an MLflow runtime, but only you will be able to decide that. There also might be other inference servers that do MMS that I'm not aware of.
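To make the multi-model point concrete, here is a minimal sketch using Triton's official Python client to query two different models behind one server; the model names, tensor names, and shapes are illustrative assumptions:

```python
import numpy as np
import tritonclient.http as httpclient

# One client, one endpoint -- any number of models loaded from the
# server's model repository.
client = httpclient.InferenceServerClient(url="localhost:8000")

def infer(model_name, input_name, output_name, batch):
    inp = httpclient.InferInput(input_name, list(batch.shape), "FP32")
    inp.set_data_from_numpy(batch)
    result = client.infer(
        model_name=model_name,
        inputs=[inp],
        outputs=[httpclient.InferRequestedOutput(output_name)],
    )
    return result.as_numpy(output_name)

# Hypothetical models living in the same model repository.
image_batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
feature_batch = np.random.rand(1, 128).astype(np.float32)

print(infer("resnet50", "INPUT__0", "OUTPUT__0", image_batch).shape)
print(infer("text_encoder", "INPUT__0", "OUTPUT__0", feature_batch).shape)
```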
- I mean... we COULD just make our own lol
[1] https://docs.nvidia.com/launchpad/ai/chatbot/latest/chatbot-triton-overview.html
[2] https://github.com/triton-inference-server/server
[3] https://neptune.ai/blog/deploying-ml-models-on-gpu-with-kyle-morris
[4] https://thechief.io/c/editorial/comparison-cloud-gpu-providers/
[5] https://geekflare.com/best-cloud-gpu-platforms/
- Why TensorFlow for Python is dying a slow death
"TensorFlow has the better deployment infrastructure"
TensorFlow Serving is nice in that it's so tightly integrated with TensorFlow. As usual, that goes both ways: it's so tightly coupled to TensorFlow that if the MLOps side of the solution is using TensorFlow Serving, you're going to get "trapped" in the TensorFlow ecosystem (essentially).
For PyTorch models (and just about anything else) I've been really enjoying Nvidia Triton Server [0]. Of course it further entrenches Nvidia and CUDA in the space (although you can execute models CPU-only), but for a deployment today and for the foreseeable future you're almost certainly going to be using a CUDA stack anyway.
Triton Server is very impressive and I'm always surprised to see how relatively niche it is.
[0] - https://github.com/triton-inference-server/server
jina
- Jina.ai: Self-host Multimodal models
- FLaNK Stack Weekly for 30 Oct 2023
- Cross data type search that wasn’t supported well using Elasticsearch
Jina, mainly because of their use of neural networks and AI.
- Recommend a Lightweight Launcher with Nested Folders
- I plan to build my own AI-powered search engine for my portfolio. Do you know ones that are open-source?
Jina - It's an open-source project where you can build search engines. Well, maybe not no-code, but it claims you only need a few lines of code to create a project. The project supports semantic, text, image, audio, and video search. What I'm also interested in is their neural search and generative AI, and the number of GitHub repos that they have. I've had this on my radar since this is also something I was interested in.
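As a taste of what "a few lines of code" means here, a minimal sketch of a Jina 3-style Flow with a toy in-memory indexer; the Executor, its endpoints, and the hand-written embeddings are illustrative assumptions, not Jina's built-in components:

```python
from jina import Document, DocumentArray, Executor, Flow, requests


class ToyIndexer(Executor):
    """Hypothetical in-memory index: stores docs on /index, matches on /search."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._storage = DocumentArray()

    @requests(on="/index")
    def index(self, docs: DocumentArray, **kwargs):
        self._storage.extend(docs)

    @requests(on="/search")
    def search(self, docs: DocumentArray, **kwargs):
        docs.match(self._storage, limit=3)  # cosine match on .embedding


f = Flow().add(uses=ToyIndexer)
with f:
    # Embeddings are hand-written stand-ins for a real encoder's output.
    f.post("/index", DocumentArray([Document(text="hello world", embedding=[1.0, 0.0])]))
    results = f.post("/search", Document(text="hi", embedding=[0.9, 0.1]))
    print(results[0].matches[0].text)
```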
- How can we match images in our database?
Do you guys have any ideas how we can match images in our database? We're working on a project that is about matching images in our database. We were trying to use SIFT and some other similar methods, but for some reason nothing seems to be working that well. Does anyone have any suggestions for the most effective way to do this? Maybe some open-source solutions like Hugging Face or Jina AI? We just want to make sure our image matching is correct, and that part's been a bit of a struggle on our part.
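One common alternative to SIFT-style keypoint matching is embedding images with a pretrained model and comparing vectors; a minimal sketch using the sentence-transformers CLIP wrapper (the file paths are placeholders):

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP maps images into a shared embedding space; visually similar
# images end up close together, with no keypoints involved.
model = SentenceTransformer("clip-ViT-B-32")

paths = ["query.jpg", "candidate_a.jpg", "candidate_b.jpg"]  # placeholder files
embeddings = model.encode([Image.open(p) for p in paths], convert_to_tensor=True)

# Cosine similarity of the query (row 0) against the candidates.
print(util.cos_sim(embeddings[0], embeddings[1:]))
```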
- Can AI 3D model search engines be a thing this year?
The tech lets you find 3D models without sifting through tons of text: an information retrieval framework does the heavy lifting and compares models to each other, no descriptions or keywords needed.
- Any MLOps platform you use?
Jina AI - They offer a neural search solution that can help build smarter, more efficient search engines. They also have a list of cool GitHub repos that you can check out. Similar to Vertex AI, they have image classification tools, NLP models, fine-tuners, etc.
- This week(s) in DocArray
Well, it's not exactly a new feature, but we've been working on early support for DocArray v2 in Jina.
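For readers who haven't followed DocArray, v2's headline change is user-defined document schemas in place of the one fixed Document type; a minimal sketch (the schema below is made up for illustration):

```python
from docarray import BaseDoc
from docarray.typing import ImageUrl, NdArray


class ProductDoc(BaseDoc):
    """Hypothetical schema: in DocArray v2 you declare your own document type."""

    title: str
    image: ImageUrl
    embedding: NdArray[128]


doc = ProductDoc(
    title="red shoe",
    image="https://example.com/shoe.png",
    embedding=[0.0] * 128,
)
print(doc.title, doc.embedding.shape)
```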
- Multi-model serving options
Jina lets you serve all of your models through the same Gateway while deploying them as individual microservices. You can also tie your models together in a pipeline if needed. There are also some nice ML-focused features such as dynamic batching.
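As a sketch of the Gateway-plus-microservices idea, here is a Flow chaining two hypothetical Executors; the model logic is stubbed out and the names are made up:

```python
from jina import Document, DocumentArray, Executor, Flow, requests


class Embedder(Executor):
    """Stub for model #1: a real version would run an encoder."""

    @requests
    def embed(self, docs: DocumentArray, **kwargs):
        for doc in docs:
            doc.embedding = [float(len(doc.text))]  # placeholder "model"


class Ranker(Executor):
    """Stub for model #2: a real version would score or rerank."""

    @requests
    def rank(self, docs: DocumentArray, **kwargs):
        for doc in docs:
            doc.tags["score"] = float(doc.embedding[0])


# One Gateway in front; each Executor is deployed as its own microservice
# and can be scaled or replaced independently.
f = Flow().add(uses=Embedder, name="embedder").add(uses=Ranker, name="ranker")
with f:
    results = f.post("/", Document(text="hello"))
    print(results[0].tags["score"])
```

Each `.add()` becomes a separate process (or container, when exported to Kubernetes), which is what makes the "individual microservices" framing accurate.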
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Weaviate - Weaviate is an open-source vector database that stores both objects and vectors, allowing for the combination of vector search with structured filtering with the fault tolerance and scalability of a cloud-native database.
onnx-tensorrt - ONNX-TensorRT: TensorRT backend for ONNX
haystack - 🔍 LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
dalle-flow - 🌊 A Human-in-the-Loop workflow for creating HD images from text
pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
whoogle-search - A self-hosted, ad-free, privacy-respecting metasearch engine
Triton - Triton is a dynamic binary analysis library. Build your own program analysis tools, automate your reverse engineering, perform software verification or just emulate code.
es-clip-image-search - Sample implementation of natural language image search with OpenAI's CLIP and Elasticsearch or Opensearch.
Megatron-LM - Ongoing research training transformer models at scale
growthbook - Open Source Feature Flagging and A/B Testing Platform