mpt-30B-inference vs inference

| | mpt-30B-inference | inference |
|---|---|---|
| Mentions | 3 | 2 |
| Stars | 573 | 2,871 |
| Growth | - | 26.5% |
| Activity | 6.2 | 9.8 |
| Last commit | 11 months ago | 7 days ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mpt-30B-inference

- New open-source model with 8k context runs on CPU, outperforms GPT-3
- MPT 30B inference code using CPU
- [D] Is there an efficient way to make inferences with open-source LLM?
  "4-bit. I've used this implementation: https://github.com/abacaj/mpt-30B-inference/tree/main"
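The 4-bit inference mentioned in that post trades precision for memory: each weight is stored as a 4-bit integer plus a shared scale instead of a 16- or 32-bit float, which is what lets a 30B-parameter model fit in CPU RAM. A minimal sketch of symmetric 4-bit quantization, for illustration only (the linked repo ships pre-quantized GGML weights rather than quantizing on the fly):

```python
def quantize_4bit(weights):
    """Symmetric 4-bit quantization: map floats to integer codes in [-7, 7]."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0  # avoid divide-by-zero
    codes = [max(-7, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_4bit(codes, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [c * scale for c in codes]

weights = [0.42, -1.31, 0.07, 2.8]
codes, scale = quantize_4bit(weights)
approx = dequantize_4bit(codes, scale)
# Each weight is recovered to within one quantization step (the scale).
assert all(abs(a - b) <= scale for a, b in zip(weights, approx))
```

Real 4-bit schemes quantize per block (e.g. 32 weights per scale) to keep the error bounded across a whole tensor, but the memory arithmetic is the same: 4 bits per weight plus a small per-block overhead.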
inference

- GreptimeAI + Xinference - Efficient Deployment and Monitoring of Your LLM Applications
  Xorbits Inference (Xinference) is an open-source platform that streamlines the operation and integration of a wide array of AI models. With Xinference, you can run inference with any open-source LLM, embedding model, or multimodal model, in the cloud or on-premises, and build robust AI-driven applications. It provides a RESTful API compatible with the OpenAI API, a Python SDK, a CLI, and a WebUI. It also integrates with third-party developer tools such as LangChain, LlamaIndex, and Dify, facilitating model integration and development.
- 🤖 AI Podcast - Voice Conversations 🎙 with Local LLMs on M2 Max
  Code: https://github.com/xorbitsai/inference/blob/main/examples/AI_podcast.py
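Because Xinference exposes an OpenAI-compatible REST API, any OpenAI-style client code can talk to a locally served model. A hedged sketch using only the standard library (the port 9997 default and the `my-llm` model id are assumptions; both depend on how your Xinference server was launched):

```python
import json
import urllib.request

# Assumption: an Xinference server running locally on its default port,
# with a model launched under the id "my-llm". Deployment-specific.
BASE_URL = "http://127.0.0.1:9997/v1"

def build_chat_request(model, messages):
    """Build an OpenAI-style /v1/chat/completions request (url, JSON body)."""
    url = f"{BASE_URL}/chat/completions"
    body = {"model": model, "messages": messages, "temperature": 0.7}
    return url, json.dumps(body).encode()

def chat(model, messages):
    """POST the request and return the assistant's reply text."""
    url, body = build_chat_request(model, messages)
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

url, body = build_chat_request("my-llm", [{"role": "user", "content": "Hi"}])
print(url)  # http://127.0.0.1:9997/v1/chat/completions
```

The same request shape works against the official `openai` Python package by pointing its `base_url` at the local server, which is the practical payoff of the OpenAI-compatible endpoint.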
What are some alternatives?
- rwkv.cpp - INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model
- truss - The simplest way to serve AI/ML models in production
- vllm - A high-throughput and memory-efficient inference and serving engine for LLMs
- agentchain - Chain together LLMs for reasoning and orchestrate multiple large models to accomplish complex tasks
- llm-rp - ✨ Your Custom Offline Role Play with LLM and Stable Diffusion on Mac and Linux (for now) 🧙‍♂️
- ChatGLM2-6B - An open-source bilingual (Chinese-English) chat LLM
- text-generation-inference - Large Language Model Text Generation Inference
- h2o-wizardlm - Open-source implementation of WizardLM to turn documents into Q&A pairs for LLM fine-tuning
- chatdocs - Chat with your documents offline using AI
- aihandler - A simple engine to help run diffusers and transformers models
- inference-benchmark - Benchmark for online serving of machine-learning models (LLM, embedding, Stable Diffusion, Whisper)