| | code-llama-for-vscode | llama-gpt |
|---|---|---|
| Mentions | 5 | 7 |
| Stars | 523 | 10,464 |
| Growth | - | 1.6% |
| Activity | 4.6 | 7.4 |
| Last commit | 10 months ago | about 2 months ago |
| Language | Python | TypeScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
code-llama-for-vscode
- Stable Code 3B: Coding on the Edge
  How are people using codellama and this in their workflows?
  I found one option: https://github.com/xNul/code-llama-for-vscode
  But I'm guessing there are others, and they might differ in how they provide context to the model.
- LLMs up to 4x Faster With latest Nvidia drivers on Windows
  Do you use https://github.com/xNul/code-llama-for-vscode or something else?
  Haven't found any good setup instructions for Linux, or my Google skills are failing me.
- Continue with LocalAI: An alternative to GitHub's Copilot that runs locally
  Ollama only works on Mac. Here is a portable option:
  https://github.com/xnul/code-llama-for-vscode
- Code Llama for VS Code
- Code Llama for VSCode - A simple API which mocks llama.cpp to enable support for Code Llama with the Continue Visual Studio Code extension. Cross-platform support. No login/key/etc, 100% local.
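The description above says the tool works by mocking llama.cpp's HTTP interface, so the Continue extension believes it is talking to a local llama.cpp server while requests are actually served by Code Llama. As a rough sketch only — the endpoint path, JSON field names, and handler here are assumptions for illustration, not taken from the project's source — such a mock server could look like:

```python
# Illustrative sketch of a llama.cpp-style mock endpoint. The real project
# forwards prompts to a Code Llama backend; here we return a canned string.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def fake_completion(prompt: str) -> str:
    # Placeholder: a real server would run the prompt through Code Llama.
    return f"// completion for: {prompt[:20]}"

class MockLlamaCppHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body Continue would send to a llama.cpp server
        # (the "prompt"/"content" field names are assumptions).
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        reply = {"content": fake_completion(body.get("prompt", ""))}
        payload = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        # Silence per-request logging
        pass

def make_server(port: int = 0) -> HTTPServer:
    # port=0 lets the OS pick a free port
    return HTTPServer(("127.0.0.1", port), MockLlamaCppHandler)
```

Because the mock speaks plain HTTP with JSON bodies, any client that expects a local llama.cpp server can be pointed at it unchanged — which is the appeal of this approach: no login, no API key, everything stays on the machine.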
llama-gpt
- FLaNK Stack Weekly 28 August 2023
- Continue with LocalAI: An alternative to GitHub's Copilot that runs locally
  Wonder if you can pair it with https://github.com/getumbrel/llama-gpt
- Show HN: LlamaGPT – Self-hosted, offline, private AI chatbot, powered by Llama 2
  I put up a draft PR to demo how to run it on a GPU: https://github.com/getumbrel/llama-gpt/pull/11
  It breaks other things like model downloading, but once I got it to a working state for myself, I figured why not put it up there in case it's useful. If I have time, I'll try to rework it a bit with more parameters and less Dockerfile repetition to fit the main project better.
- llama-gpt - A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device
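Self-hosted chatbots of this kind typically expose an OpenAI-style chat-completions HTTP API so existing clients work against them unmodified. Assuming llama-gpt does the same — the base URL, port, endpoint path, and model name below are illustrative assumptions, not taken from its documentation — a client request could be built like this:

```python
# Sketch of an OpenAI-style chat request against a local, self-hosted
# server. URL, port, and model name are assumptions for illustration.
import json
import urllib.request

def build_chat_request(base_url: str, message: str,
                       model: str = "llama-2-7b-chat") -> urllib.request.Request:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": message}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Usage (requires a running local server, so not executed here):
# req = build_chat_request("http://localhost:3001", "Hello!")
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Since no request ever leaves the local network, this matches the "100% private, with no data leaving your device" claim above.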
What are some alternatives?
- ollama-webui - ChatGPT-Style WebUI for LLMs (Formerly Ollama WebUI) [Moved to: https://github.com/open-webui/open-webui]
- ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
- text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
- serge - A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy-to-use API.
- twinny - The most no-nonsense, locally or API-hosted AI code completion plugin for Visual Studio Code - like GitHub Copilot but completely free and 100% private.
- gpt4all - gpt4all: run open-source LLMs anywhere
- go-llama2 - Llama 2 inference in one file of pure Go
- trulens - Evaluation and Tracking for LLM Experiments
- Finetune_LLMs - Repo for fine-tuning causal LLMs
- seamless_communication - Foundational Models for State-of-the-Art Speech and Text Translation
- AnglE - Train and Infer Powerful Sentence Embeddings with AnglE | 🔥 SOTA on STS and MTEB Leaderboard
- prettymapp - 🖼️ Create beautiful maps from OpenStreetMap data in a streamlit webapp