vllm-rocm
vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs (by EmbeddedLLM)
local-llm-function-calling
A tool for generating function arguments and choosing what function to call with local LLMs (by rizerphe)
| | vllm-rocm | local-llm-function-calling |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 78 | 278 |
| Growth | - | - |
| Activity | 9.9 | 7.0 |
| Last commit | 4 days ago | 2 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
The number of mentions indicates the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
vllm-rocm
Posts with mentions or reviews of vllm-rocm. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-10.
local-llm-function-calling
Posts with mentions or reviews of local-llm-function-calling. We have used some of these posts to build our list of alternatives and similar projects.
- **Tell HN: OpenAI still has a moat, it's called function calling and its API**

  > hello? https://github.com/rizerphe/local-llm-function-calling
What are some alternatives?
When comparing vllm-rocm and local-llm-function-calling you can also consider the following projects:
vllm - A high-throughput and memory-efficient inference and serving engine for LLMs
llmflows - LLMFlows - Simple, Explicit and Transparent LLM Apps
pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.
funcchain - ⛓️ build cognitive systems, pythonic
text-generation-inference - Large Language Model Text Generation Inference
AtomGPT - A Chinese-English pretrained large language model, aiming for parity with ChatGPT
mosec - A high-performance ML model serving framework, offers dynamic batching and CPU/GPU pipelines to fully exploit your compute machine