| | benchllama | code-llama-for-vscode |
|---|---|---|
| Mentions | 2 | 5 |
| Stars | 18 | 516 |
| Growth | - | - |
| Activity | 8.0 | 4.6 |
| Last Commit | 3 months ago | 9 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
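The exact formula behind the activity number is not published, but the description above implies two ingredients: a commit score that decays with age, and a percentile rank over all tracked projects scaled to 0-10. A toy sketch of that idea (the half-life value and both function names are illustrative assumptions, not the site's actual formula):

```python
def activity_score(commit_ages_months, half_life=3.0):
    """Toy commit score: each commit's weight halves every `half_life`
    months, so recent commits count more than older ones."""
    return sum(0.5 ** (age / half_life) for age in commit_ages_months)

def activity_rating(score, all_scores):
    """Scale the project's percentile rank among all tracked projects
    to 0-10, so a rating of 9.0 means it is in the top 10%."""
    below = sum(s < score for s in all_scores)
    return 10.0 * below / len(all_scores)

# A project with recent commits outscores one with the same number
# of old commits.
recent = activity_score([0, 1, 1, 2])
stale = activity_score([10, 11, 12, 13])
```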
Posts mentioning code-llama-for-vscode:
- Stable Code 3B: Coding on the Edge
  How are people using codellama and this in their workflows? I found one option: https://github.com/xNul/code-llama-for-vscode. But I'm guessing there are others, and they might differ in how they provide context to the model.
- LLMs up to 4x Faster With latest Nvidia drivers on Windows
  Do you use https://github.com/xNul/code-llama-for-vscode or something else? I haven't found any good setup instructions for Linux, or my Google skills are failing me.
- Continue with LocalAI: An alternative to GitHub's Copilot that runs locally
  Ollama only works on Mac. Here is a portable option: https://github.com/xnul/code-llama-for-vscode
- Code Llama for VS Code
- Code Llama for VSCode - A simple API which mocks llama.cpp to enable support for Code Llama with the Continue Visual Studio Code extension. Cross-platform support. No login/key/etc, 100% local.
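"Mocks llama.cpp" here means imitating llama.cpp's local HTTP server so the Continue extension can talk to a Code Llama backend through the same interface. A minimal sketch of that idea, assuming llama.cpp's JSON `/completion` endpoint shape; `generate_completion` is a hypothetical stand-in, not the project's actual code:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate_completion(prompt: str, n_predict: int = 128) -> str:
    # Hypothetical backend call: the real project would forward the
    # prompt to a locally running Code Llama model. Here we just echo.
    return f"# completion for: {prompt[:20]}"

class MockLlamaCppHandler(BaseHTTPRequestHandler):
    """Minimal handler imitating llama.cpp's HTTP server: Continue
    POSTs JSON to /completion and reads back {"content": ...}."""

    def do_POST(self):
        if self.path != "/completion":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        content = generate_completion(
            body.get("prompt", ""), body.get("n_predict", 128)
        )
        payload = json.dumps({"content": content}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

# To serve locally:
# HTTPServer(("127.0.0.1", 8080), MockLlamaCppHandler).serve_forever()
```

Because everything runs on localhost, no login or API key is involved, which matches the "100% local" claim above.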
What are some alternatives?
ollama-webui - ChatGPT-Style WebUI for LLMs (Formerly Ollama WebUI) [Moved to: https://github.com/open-webui/open-webui]
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
go-llama2 - Llama 2 inference in one file of pure Go
twinny - The most no-nonsense, locally or API-hosted AI code completion plugin for Visual Studio Code - like GitHub Copilot but completely free and 100% private.
Finetune_LLMs - Repo for fine-tuning causal LLMs
GoLLIE - Guideline following Large Language Model for Information Extraction
AnglE - Angle-optimized Text Embeddings | 🔥 SOTA on STS and MTEB Leaderboard
Fooocus - Focus on prompting and generating
debugpy-run - Finds and runs debugpy for VS Code "remote attach" command line debugging.
realtime-bakllava - llama.cpp with the BakLLaVA model, describing what it sees
llama.cpp - LLM inference in C/C++