| | client | ollama |
|---|---|---|
| Mentions | 2 | 213 |
| Stars | 494 | 66,540 |
| Growth | 5.5% | 23.9% |
| Activity | 9.4 | 9.9 |
| Latest commit | 4 days ago | 6 days ago |
| Language | C++ | Go |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
client
- Ollama releases OpenAI API compatibility
- While keeping power utilization below X
They will take the exported model and dynamically deploy the package to a Triton instance running on your actual inference-serving hardware, then generate requests to meet your SLAs and come up with the optimal model configuration. You even get exported metrics and pretty reports for every configuration used/attempted. You can take the same exported package, change the SLA params, and it will automatically re-generate the configuration for you.
- Performance on a completely different level. TensorRT-LLM especially is extremely new and very early but already at high scale you can start to see > 10k RPS on a single node.
- gRPC support. Especially when using pre/post processing, ensemble, etc. you can configure clients programmatically to use the individual models or the ensemble chain (as one example); a minimal client sketch follows below. This opens up a very wide range of powerful architecture options that simply aren't available anywhere else. gRPC could probably be thought of as AsyncLLMEngine; it can abstract actual input/output or expose raw in/out so models, tokenizers, decoders, etc. can send/receive raw data/numpy/tensors.
- DALI support[5]. Combined with everything above, you can add DALI in the processing chain to do things like take input image/audio/etc, copy to GPU once, GPU accelerate scaling/conversion/resampling/whatever, and get output.
vLLM and HF TGI are very cool and I use them in certain cases. The fact that you can give them an HF model and they just fire up with a single command and offer good performance is very impressive, but there are an untold number of reasons these providers use Triton. It's in a class of its own.
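To make the gRPC client usage mentioned above a bit more concrete, here is a minimal sketch using the tritonclient Python package; the model name, tensor names, and shapes are placeholders that would need to match whatever the model's config actually declares:

```python
import numpy as np
import tritonclient.grpc as grpcclient

# Connect to Triton's gRPC endpoint (port 8001 by default).
client = grpcclient.InferenceServerClient(url="localhost:8001")

# Build the input tensor; "INPUT0" and "my_model" are placeholders for the
# names declared in the model's config.
data = np.random.rand(1, 4).astype(np.float32)
inp = grpcclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

# Run inference and read the named output back as a numpy array.
result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```

The same client object can point at an individual model or at an ensemble chain, which is what makes the programmatic configuration described above possible.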
[0] - https://mistral.ai/news/la-plateforme/
[1] - https://www.cloudflare.com/press-releases/2023/cloudflare-po...
[2] - https://www.nvidia.com/en-us/case-studies/amazon-accelerates...
[3] - https://github.com/triton-inference-server/model_navigator
[4] - https://github.com/triton-inference-server/client/blob/main/...
[5] - https://github.com/triton-inference-server/dali_backend
- Show HN: Software for Remote GPU-over-IP
Inference servers essentially turn a model running on CPU and/or GPU hardware into a microservice.
Many of them support the kserve API standard[0], which covers everything from model loading/unloading to (of course) inference requests across models, versions, frameworks, etc.
So in the case of Triton[1] you can have any number of different TensorFlow/torch/tensorrt/onnx/etc. models, versions, and variants. You can have one or more Triton instances running on hardware with access to local GPUs (for this example). Then you can put standard REST and/or gRPC load balancers (or whatever you want) in front of them, hit them via another API, whatever.
Now all your applications need to do to perform inference is an HTTP POST (or a call via a client[2]) with the model input; Triton runs it on a GPU (or CPU if you want), and you get back whatever the model output is. A minimal example follows below.
Not a sales pitch for Triton, but it (like some others) can also do things like dynamic batching with QoS parameters, automated model profiling and performance optimization[3], really granular control over resources, response caching, Python middleware for application/biz logic, accelerated media processing with Nvidia DALI, all kinds of stuff.
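A rough sketch of what that POST can look like, assuming a hypothetical model named "my_model" with a single FP32 input; Triton exposes the KServe-style HTTP endpoint /v2/models/&lt;name&gt;/infer on port 8000 by default:

```python
import requests

# Hypothetical model with one FP32 input named "INPUT0"; the names, shape,
# and data must match the model's configuration.
payload = {
    "inputs": [
        {
            "name": "INPUT0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3, 0.4],
        }
    ]
}

resp = requests.post("http://localhost:8000/v2/models/my_model/infer", json=payload)

# The response is JSON carrying the model's output tensors.
print(resp.json()["outputs"])
```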
[0] - https://github.com/kserve/kserve
[1] - https://github.com/triton-inference-server/server
[2] - https://github.com/triton-inference-server/client
[3] - https://github.com/triton-inference-server/model_analyzer
ollama
- Ollama v0.1.34 Is Out
- Ask HN: What do you use local LLMs for?
- Basic internet search (I start ollama CLI faster than I can start a browser - https://ollama.com)
- Formatting/changing text
- Troubleshooting code, esp. new frameworks/libs
- Recipes
- Data entry
- Organizing thoughts: High-level lists, comparison, classification, synonyms, jargon & nomenclature
- Learning esp. by analogy and example
RAG for:
- Website assistants (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Game NPCs (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Discord/Slack/forum bots (https://github.com/bennyschmidt/ragdoll-studio/tree/master/e...)
- Character-driven storytelling and creating art in a specific style for video game loading screens, background images, avatars, website art, etc. (https://github.com/bennyschmidt/ragdoll-studio/tree/master/r...)
- FLaNK-AIM Weekly 06 May 2024
- Introducing Jan
Jan goes a step further by integrating with other local engines like LM Studio and ollama.
- Ollama v0.1.33
- Hindi-Language AI Chatbot for Enterprises Using Qdrant, MLFlow, and LangChain
# install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# get the llama3 model
ollama pull llama3
# install MLFlow
pip install mlflow
- Create an AI prototyping environment using Jupyter Lab IDE with Typescript, LangChain.js and Ollama for rapid AI prototyping
Ollama for running LLMs locally
- Setup Llama 3 using Ollama and Open-WebUI
curl -fsSL https://ollama.com/install.sh | sh
- Ollama v0.1.33 with Llama 3, Phi 3, and Qwen 110B
Streaming is not a problem (it's just a simple flag: https://github.com/wiktor-k/llama-chat/blob/main/index.ts#L2...) but I've never used voice input.
The examples show image input though: https://github.com/ollama/ollama/blob/main/docs/api.md#reque...
Maybe you can file an issue here: https://github.com/ollama/ollama/issues
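For reference, a rough sketch of an Ollama /api/generate request that combines the streaming flag with the images field from the linked docs (the model name and image path here are just placeholders):

```python
import base64
import requests

# The API expects images as base64-encoded strings; "photo.jpg" is a placeholder path.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    json={
        "model": "llava",          # example multimodal model
        "prompt": "What is in this picture?",
        "images": [image_b64],
        "stream": False,           # set True to receive the response as streamed chunks
    },
)
print(resp.json()["response"])
```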
- I Said Goodbye to ChatGPT and Hello to Llama 3 on Open WebUI - You Should Too
I’m a huge fan of open source models, especially the newly released Llama 3. Because of the performance of both the large 70B Llama 3 model as well as the smaller and self-hostable 8B Llama 3, I’ve actually cancelled my ChatGPT subscription in favor of Open WebUI, a self-hostable ChatGPT-like UI that allows you to use Ollama and other AI providers while keeping your chat history, prompts, and other data locally on any computer you control.
What are some alternatives?
YetAnotherChatUI - Yet another ChatGPT UI. Bring your own API key.
llama.cpp - LLM inference in C/C++
kserve - Standardized Serverless ML Inference Platform on Kubernetes
gpt4all - gpt4all: run open-source LLMs anywhere
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
lookma - LookMa connects Android devices to locally-run LLMs
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
dali_backend - The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's python API.
llama - Inference code for Llama models
llamafile - Distribute and run LLMs with a single file.
LocalAI - The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It allows you to generate text, audio, video, and images. Also with voice cloning capabilities.