llm-vscode-inference-server

An endpoint server for efficiently serving quantized open-source LLMs for code. (by wangcx18)

llm-vscode-inference-server Alternatives

Similar projects and alternatives to llm-vscode-inference-server

NOTE: The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives; a higher number therefore suggests a more popular or more similar alternative to llm-vscode-inference-server.

llm-vscode-inference-server reviews and mentions

Posts with mentions or reviews of llm-vscode-inference-server. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-11.
  • Replit's new AI Model now available on Hugging Face
    3 projects | news.ycombinator.com | 11 Oct 2023
    "Requests for code generation are made via an HTTP request.

    You can use the Hugging Face Inference API or your own HTTP endpoint, provided it adheres to the API specified here[1] or here[2]."

    It's fairly easy to use your own model locally with the plugin. You can use one of the community-developed inference servers, which are listed at the bottom of the page, but here are the links[3] to both[4].

    [1]: https://huggingface.co/docs/api-inference/detailed_parameter...

    [2]: https://huggingface.github.io/text-generation-inference/#/Te...

    [3]: https://github.com/wangcx18/llm-vscode-inference-server

    [4]: https://github.com/wangcx18/llm-vscode-inference-server
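As a rough sketch of what "adheres to the API" means: the Hugging Face Inference API (link [1] above) expects a JSON body with an `inputs` string and an optional `parameters` object, and returns a JSON list of objects containing `generated_text`. The snippet below builds such a request payload and parses a sample response; the parameter values and the example prompt are illustrative, and the actual endpoint URL would be whatever host/port your own inference server listens on.

```python
import json

# Request payload in the Hugging Face Inference API shape that the
# llm-vscode plugin can talk to. The prompt and parameter values here
# are illustrative, not defaults of any particular server.
payload = {
    "inputs": "def fibonacci(n):",
    "parameters": {
        "max_new_tokens": 60,   # cap on the length of the completion
        "temperature": 0.2,     # low temperature for more deterministic code
        "do_sample": True,
    },
}
body = json.dumps(payload)  # this string is POSTed to your endpoint

# A compatible server replies with a JSON list of generations:
sample_response = '[{"generated_text": "def fibonacci(n):\\n    ..."}]'
completions = json.loads(sample_response)
print(completions[0]["generated_text"])
```

In practice you would send `body` with an HTTP client (e.g. `requests.post(url, data=body)`) to the URL configured in the plugin's endpoint setting.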

Stats

Basic llm-vscode-inference-server repo stats
Mentions: 1
Stars: 44
Activity: 5.3
Last commit: 7 months ago
