| | vllm-rocm | AtomGPT |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 78 | 189 |
| Growth | - | - |
| Activity | 9.9 | 10.0 |
| Latest commit | 4 days ago | 10 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
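The exact formula behind the Activity number is not published here, but the description above (recent commits weighted more heavily, then ranked against all tracked projects) can be sketched roughly as follows. The 30-day half-life and the percentile mapping are assumptions for illustration, not the site's actual method:

```python
import time
from bisect import bisect_left

def raw_activity(commit_timestamps, half_life_days=30.0):
    """Sum of exponentially decayed commit weights: a commit made
    half_life_days ago counts half as much as one made today.
    The 30-day half-life is an assumed value, not the site's."""
    now = time.time()
    return sum(
        0.5 ** (((now - ts) / 86400.0) / half_life_days)
        for ts in commit_timestamps
    )

def relative_activity(score, all_scores):
    """Map a raw score to a 0-10 percentile rank across all tracked
    projects, so that 9.0 means roughly the top 10%."""
    ranked = sorted(all_scores)
    return 10.0 * bisect_left(ranked, score) / len(ranked)
```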
What are some alternatives?
local-llm-function-calling - A tool for generating function arguments and choosing which function to call with local LLMs
realtime-bakllava - llama.cpp with the BakLLaVA model, describing what it sees
vllm - A high-throughput and memory-efficient inference and serving engine for LLMs (see the usage sketch after this list)
pinferencia - A model deployment and inference library in Python that aims to be the simplest possible model inference server
chatgpt-extractive-shortener - Shortens a paragraph of text with ChatGPT, using successive rounds of word-level extractive summarization.
text-generation-inference - Large Language Model Text Generation Inference
GoLLIE - Guideline following Large Language Model for Information Extraction
mosec - A high-performance ML model serving framework offering dynamic batching and CPU/GPU pipelines to fully utilize your hardware
safe-rlhf - Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
ray-llm - RayLLM - LLMs on Ray
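vllm-rocm, as the name suggests, is vLLM built for AMD's ROCm stack. Assuming the fork keeps the upstream Python API, a minimal offline-inference sketch looks like this; the model name and sampling values are placeholders:

```python
from vllm import LLM, SamplingParams  # pip install vllm

# Placeholder model; any Hugging Face causal LM supported by vLLM works.
llm = LLM(model="facebook/opt-125m")

params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Prompts are submitted as a batch; vLLM schedules them internally.
outputs = llm.generate(["What is PagedAttention?"], params)
for out in outputs:
    print(out.outputs[0].text)
```

The same script should run unchanged on NVIDIA (upstream vLLM) or AMD (the ROCm build), since the hardware backend is selected at install time rather than in the API.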