inference-benchmark vs inference
| | inference-benchmark | inference |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 26 | 2,799 |
| Growth | - | 24.6% |
| Activity | 6.4 | 9.8 |
| Last commit | 11 months ago | 3 days ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
inference-benchmark
- [D] Handling Concurrent Request for ML Model API
  I have done some benchmarks before: https://github.com/tensorchord/inference-benchmark
inference
- GreptimeAI + Xinference - Efficient Deployment and Monitoring of Your LLM Applications
  Xorbits Inference (Xinference) is an open-source platform that streamlines the operation and integration of a wide array of AI models. With Xinference, you can run inference with any open-source LLM, embedding model, or multimodal model, either in the cloud or on your own premises, and build robust AI-driven applications. It provides a RESTful API compatible with the OpenAI API, a Python SDK, a CLI, and a WebUI, and it integrates with third-party developer tools such as LangChain, LlamaIndex, and Dify to ease model integration and development. (A minimal usage sketch follows this list.)
- 🤖 AI Podcast - Voice Conversations 🎙 with Local LLMs on M2 Max
  Code: https://github.com/xorbitsai/inference/blob/main/examples/AI_podcast.py
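Because Xinference exposes an OpenAI-compatible REST endpoint, a client can talk to it with the standard `openai` Python package. The sketch below is illustrative only: the base URL (`http://localhost:9997/v1`) and the model name (`my-llm`) are assumptions for a locally launched model, not values taken from the posts above.

```python
# Minimal sketch: querying a locally running Xinference server through its
# OpenAI-compatible REST API. Assumes a model has already been launched and
# is reachable at the (assumed) endpoint below.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:9997/v1",  # assumed local Xinference endpoint
    api_key="not-needed",                 # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="my-llm",  # hypothetical UID/name of the launched model
    messages=[{"role": "user", "content": "Summarize what Xinference does."}],
)
print(response.choices[0].message.content)
```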
What are some alternatives?
agentchain - Chain together LLMs for reasoning & orchestrate multiple large models for accomplishing complex tasks
truss - The simplest way to serve AI/ML models in production
ChatFred - Alfred workflow using ChatGPT, DALL·E 2 and other models for chatting, image generation and more.
mosec - A high-performance ML model serving framework offering dynamic batching and CPU/GPU pipelines to make full use of your compute resources (see the sketch after this list)
ChatGLM2-6B - ChatGLM2-6B: An Open Bilingual Chat LLM (open-source bilingual dialogue language model)
text-generation-inference - Large Language Model Text Generation Inference
h2o-wizardlm - Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning
mpt-30B-inference - Run inference on MPT-30B using CPU
aihandler - A simple engine to help run diffusers and transformers models
rwkv.cpp - INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
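The "dynamic batching" that mosec advertises can be shown with a small generic sketch. This is not mosec's API; the queue, batch size, and wait window below are made-up values used only to illustrate the idea: requests arriving within a short window are merged into one batch so the model runs once per batch instead of once per request.

```python
# Generic dynamic-batching illustration (not mosec's actual API).
import queue
import threading
import time

request_queue: "queue.Queue[str]" = queue.Queue()
MAX_BATCH_SIZE = 8        # assumed cap on batch size
MAX_WAIT_SECONDS = 0.01   # assumed window to wait for more requests

def fake_model(batch):
    # Stand-in for a real model call that benefits from batched input.
    return [item.upper() for item in batch]

def batching_loop():
    while True:
        batch = [request_queue.get()]          # block until at least one request
        deadline = time.monotonic() + MAX_WAIT_SECONDS
        while len(batch) < MAX_BATCH_SIZE:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(request_queue.get(timeout=remaining))
            except queue.Empty:
                break
        print(fake_model(batch))               # one model call serves the whole batch

threading.Thread(target=batching_loop, daemon=True).start()
for text in ["hello", "world", "dynamic", "batching"]:
    request_queue.put(text)
time.sleep(0.1)  # let the background loop drain the queue before exiting
```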