| | code-llama-for-vscode | koboldcpp |
|---|---|---|
| Mentions | 5 | 180 |
| Stars | 523 | 4,187 |
| Growth | - | - |
| Activity | 4.6 | 10.0 |
| Last commit | 10 months ago | 5 days ago |
| Language | Python | C++ |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
code-llama-for-vscode
- Stable Code 3B: Coding on the Edge
  How are people using codellama and this in their workflows?
  I found one option: https://github.com/xNul/code-llama-for-vscode
  But I'm guessing there are others, and they might differ in how they provide context to the model.
- LLMs up to 4x Faster With latest Nvidia drivers on Windows
  Do you use https://github.com/xNul/code-llama-for-vscode or something else?
  I haven't found any good setup instructions for Linux, or my Google skills are failing me.
- Continue with LocalAI: An alternative to GitHub's Copilot that runs locally
  Ollama only works on Mac. Here is a portable option:
  https://github.com/xnul/code-llama-for-vscode
- Code Llama for VS Code
- Code Llama for VSCode - A simple API which mocks llama.cpp to enable support for Code Llama with the Continue Visual Studio Code extension. Cross-platform support. No login/key/etc, 100% local.
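Since the extension works by mocking llama.cpp's HTTP completion server so that Continue can talk to it, a client interaction might look like the sketch below. The endpoint path, port, and payload fields are assumptions based on llama.cpp's example server API, not on code-llama-for-vscode's actual implementation.

```python
import json
import urllib.request

# Assumed llama.cpp-style server address; the project's actual host/port
# may differ -- this is an illustrative sketch only.
SERVER_URL = "http://localhost:8080/completion"

def build_completion_request(prompt: str, n_predict: int = 128) -> dict:
    """Build a llama.cpp-style /completion payload (field names taken
    from llama.cpp's example server, assumed to apply here)."""
    return {
        "prompt": prompt,
        "n_predict": n_predict,   # maximum number of tokens to generate
        "temperature": 0.2,       # low temperature suits code completion
    }

def complete(prompt: str) -> str:
    """POST the prompt to the local server and return the generated text."""
    body = json.dumps(build_completion_request(prompt)).encode()
    req = urllib.request.Request(
        SERVER_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]

if __name__ == "__main__":
    # Requires a llama.cpp-compatible server running locally.
    print(json.dumps(build_completion_request("def fibonacci(n):"), indent=2))
```

The point of the mock is that Continue never needs to know it isn't talking to a real llama.cpp instance, which is what makes the setup login-free and fully local.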
koboldcpp
- Any Online Communities on Local/Home AI?
- Koboldcpp-1.62.1 adds support for Command-R+
- Show HN: I made an app to use local AI as daily driver
- Easiest way to show my model to my mom?
  FYI this is the easiest way to host on the horde: https://github.com/LostRuins/koboldcpp
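Once koboldcpp is running locally, it exposes a KoboldAI-compatible HTTP API that frontends (and horde workers) talk to. A minimal client sketch, assuming the default port 5001 and the `/api/v1/generate` route of the KoboldAI API; the sampling values are illustrative, not defaults from the project:

```python
import json
import urllib.request

# koboldcpp's default local port is 5001; /api/v1/generate follows the
# KoboldAI API. Field values below are illustrative assumptions.
API_URL = "http://localhost:5001/api/v1/generate"

def build_generate_request(prompt: str, max_length: int = 80) -> dict:
    """Build a KoboldAI-style generate payload."""
    return {
        "prompt": prompt,
        "max_length": max_length,      # tokens to generate
        "max_context_length": 2048,    # context window to use
        "temperature": 0.7,
    }

def generate(prompt: str) -> str:
    """POST the prompt to a locally running koboldcpp, return the text."""
    body = json.dumps(build_generate_request(prompt)).encode()
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["results"][0]["text"]
```

Because the API is shared with KoboldAI, the same client code works whether the backend is koboldcpp on your own machine or a hosted horde worker.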
- IT Veteran... why am I struggling with all of this?
- What do you use to run your models?
- ByteDance AI researcher suggests that open source model more powerful than Gemini to be released soon
- i need some help guys
- [Guide] How to install KoboldAI on Android via Termux (Update 04-12-2023)
  For more information on Koboldcpp, see this guide: https://github.com/LostRuins/koboldcpp/wiki
- SillyTavern 1.10.10 has been released
  Out of curiosity, is there a specific reason for this? The most popular fork, KoboldCpp, is in active development, was the first to adopt the Min P sampler, and even distinguishes itself with its context shift feature. Just wondering what this means for the future. Thanks!
What are some alternatives?
ollama-webui - ChatGPT-Style WebUI for LLMs (Formerly Ollama WebUI) [Moved to: https://github.com/open-webui/open-webui]
KoboldAI
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
twinny - The most no-nonsense, locally or API-hosted AI code completion plugin for Visual Studio Code - like GitHub Copilot but completely free and 100% private.
KoboldAI - KoboldAI is generative AI software optimized for fictional use, but capable of much more!
go-llama2 - Llama 2 inference in one file of pure Go
TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)
Finetune_LLMs - Repo for fine-tuning causal LLMs
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
AnglE - Train and Infer Powerful Sentence Embeddings with AnglE | 🔥 SOTA on STS and MTEB Leaderboard
SillyTavern - LLM Frontend for Power Users. [Moved to: https://github.com/SillyTavern/SillyTavern]