| | DeepSeek-Coder | code-llama-for-vscode |
|---|---|---|
| Mentions | 8 | 5 |
| Stars | 5,499 | 516 |
| Growth | 7.7% | - |
| Activity | 8.6 | 4.6 |
| Latest commit | about 1 month ago | 9 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DeepSeek-Coder
- Meta Llama 3
deepseek-coder-instruct 6.7B still looks like it is better than Llama 3 8B on HumanEval [0], and deepseek-coder-instruct 33B is still within reach to run on a 32 GB MacBook M2 Max. Llama 3 70B, on the other hand, will be hard to run locally unless you really have 128 GB of RAM or more. But we will see in the coming days how it performs in real life.
[0] https://github.com/deepseek-ai/deepseek-coder?tab=readme-ov-...
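As a rough sanity check on the RAM figures above: resident weight memory scales with parameter count times bytes per weight, so the quantization level largely decides what fits on a given machine. A minimal back-of-envelope sketch in Python — the 1.2 overhead factor is an assumption, and real usage adds KV cache and runtime buffers on top:

```python
# Back-of-envelope weight-memory estimate for local inference.
# Assumption: memory ~= params * bytes_per_weight * overhead; this is a
# floor, since the KV cache and runtime buffers come on top of it.

BYTES_PER_WEIGHT = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}

def est_gb(params_billion: float, quant: str, overhead: float = 1.2) -> float:
    """Approximate resident memory in GB for the model weights alone."""
    return params_billion * BYTES_PER_WEIGHT[quant] * overhead

for name, size in [("6.7B", 6.7), ("33B", 33.0), ("70B", 70.0)]:
    print(name, {q: round(est_gb(size, q), 1) for q in BYTES_PER_WEIGHT})
```

By this estimate a 4-bit 6.7B model needs roughly 4 GB (comfortable on 16 GB), a 4-bit 33B roughly 20 GB (within reach on a 32 GB machine), and a 70B model at fp16 roughly 168 GB — which is where the "128 GB of RAM or more" caveat comes from unless you quantize aggressively.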
- Mistral Removes "Committing to open models" from their website
Deepseek's code (https://github.com/deepseek-ai/DeepSeek-Coder?tab=readme-ov-...) is MIT-licensed, and the model license is available too.
- FLaNK Stack 05 Feb 2024
- Stable Code 3B: Coding on the Edge
https://github.com/deepseek-ai/deepseek-coder
33B Instruct doesn't beat 6.7B Instruct by much, but maybe those percentage improvements mean more for your usage.
I run 6.7B since I have 16 GB of RAM.
- What the heck is so great about this model?
Deepseek Coder: https://github.com/deepseek-ai/DeepSeek-Coder (best open-source coding model right now)
- Deepseek Coder instruct – 6.7B model beats gpt3.5-turbo in coding
- FLaNK Stack Weekly for 13 November 2023
- DeepSeek-Coder: Has anyone tried this one?
code-llama-for-vscode
- Stable Code 3B: Coding on the Edge
How are people using codellama and this in their workflows?
I found one option: https://github.com/xNul/code-llama-for-vscode
But I'm guessing there are others, and they might differ in how they provide context to the model.
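One axis on which these integrations do differ is prompt construction. Code Llama's published infilling format wraps the text around the cursor in <PRE>/<SUF>/<MID> sentinel tokens; here is a minimal sketch of how a plugin might assemble such a prompt — the function name and truncation budget are illustrative, not taken from any particular plugin:

```python
# Sketch: building a Code Llama fill-in-the-middle prompt from editor state.
# The <PRE>/<SUF>/<MID> sentinels follow Code Llama's published infilling
# format; everything else (names, truncation budget) is illustrative.

def build_infill_prompt(text: str, cursor: int, budget: int = 2000) -> str:
    prefix = text[:cursor][-budget:]   # context before the cursor, truncated
    suffix = text[cursor:][:budget]    # context after the cursor, truncated
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

source = "def add(a, b):\n    \nprint(add(1, 2))\n"
prompt = build_infill_prompt(source, cursor=source.index("\n    ") + 5)
print(prompt)  # the model is expected to generate the missing body
```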
- LLMs up to 4x faster with latest Nvidia drivers on Windows
Do you use https://github.com/xNul/code-llama-for-vscode or something else?
I haven't found any good setup instructions for Linux, or my Google skills are failing me.
- Continue with LocalAI: An alternative to GitHub's Copilot that runs locally
Ollama only works on Mac. Here is a portable option:
https://github.com/xnul/code-llama-for-vscode
- Code Llama for VS Code
- Code Llama for VSCode - A simple API which mocks llama.cpp to enable support for Code Llama with the Continue Visual Studio Code extension. Cross-platform support. No login/key/etc, 100% local.
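To illustrate the mocking idea in that description: the editor extension is pointed at a local HTTP endpoint that answers like a llama.cpp server, so no login or key is involved. A minimal Python sketch — the /completion path and JSON field names are assumptions about that llama.cpp-style interface, and the generate stub stands in for a real local model:

```python
# Sketch of the "mock llama.cpp" idea: a tiny local HTTP server exposing a
# llama.cpp-style completion endpoint. Endpoint path and field names are
# assumptions; the echo "model" is a stand-in for a real local backend.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate(prompt: str) -> str:
    # Stand-in for real local inference (e.g. a Code Llama backend).
    return f"# completion for: {prompt[:40]!r}\n"

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = json.dumps({"content": generate(body.get("prompt", ""))}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

if __name__ == "__main__":
    # Bind to localhost only: the whole point is a 100% local loop.
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```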
What are some alternatives?
draw-a-ui - Draw a mockup and generate html for it
ollama-webui - ChatGPT-Style WebUI for LLMs (Formerly Ollama WebUI) [Moved to: https://github.com/open-webui/open-webui]
FT-Merge-Quantize-Infer-CML
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
cucim - RAPIDS GPU-accelerated image processing library (cuCIM)
go-llama2 - Llama 2 inference in one file of pure Go
linen.dev - Lightweight Google-searchable Slack alternative for Communities
twinny - The most no-nonsense, locally or API-hosted AI code completion plugin for Visual Studio Code - like GitHub Copilot but completely free and 100% private.
wubloader
Finetune_LLMs - Repo for fine-tuning causal LLMs
clipea - 📎🟢 Like Clippy but for the CLI. A blazing fast AI helper for your command line
GoLLIE - Guideline following Large Language Model for Information Extraction