llm-vscode-inference-server
An endpoint server for efficiently serving quantized open-source LLMs for code.
I don’t recommend that, since it uses the cloud for the actual inference by default (and they provide no guidance for changing that).
I don’t consider cloud inference to count as getting it working “locally”, as requested by the comment above yours.
Refact works nicely and works locally, but the challenge with any new model is making it be supported by the existing software: https://github.com/smallcloudai/refact/
"Requests for code generation are made via an HTTP request. You can use the Hugging Face Inference API or your own HTTP endpoint, provided it adheres to the API specified here[1] or here[2]."
It's fairly easy to use your own model locally with the plugin. You can just use one of the community-developed inference servers, which are listed at the bottom of the page, but here are the links[3] to both[4]. A sketch of what such an endpoint looks like follows the links below.
[1]: https://huggingface.co/docs/api-inference/detailed_parameter...
[2]: https://huggingface.github.io/text-generation-inference/#/Te...
[3]: https://github.com/wangcx18/llm-vscode-inference-server
[4]: https://github.com/wangcx18/llm-vscode-inference-server
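To make the "own HTTP endpoint" option concrete, here is a minimal sketch of a local server the plugin could point at. It assumes the plugin sends the standard Hugging Face text-generation payload ({"inputs": "<prompt>", "parameters": {...}}) and expects [{"generated_text": "<completion>"}] back; the port (8080) and the echo "model" are placeholders, not anything from the repos above:

  # Minimal sketch of a local completion endpoint (stdlib only).
  # Assumes the HF text-generation request/response shape; the
  # "inference" below is a placeholder echo, not a real model.
  import json
  from http.server import BaseHTTPRequestHandler, HTTPServer

  class CompletionHandler(BaseHTTPRequestHandler):
      def do_POST(self):
          length = int(self.headers.get("Content-Length", 0))
          payload = json.loads(self.rfile.read(length) or b"{}")
          prompt = payload.get("inputs", "")
          # Placeholder: a real server would run local inference here,
          # e.g. a quantized model loaded via transformers or llama.cpp.
          completion = prompt + "  # TODO: generated code"
          body = json.dumps([{"generated_text": completion}]).encode()
          self.send_response(200)
          self.send_header("Content-Type", "application/json")
          self.send_header("Content-Length", str(len(body)))
          self.end_headers()
          self.wfile.write(body)

  if __name__ == "__main__":
      HTTPServer(("127.0.0.1", 8080), CompletionHandler).serve_forever()

With that running, you'd point the extension's endpoint setting at http://127.0.0.1:8080 (the exact setting name varies by plugin version, so check its README).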