Projects mentioned:

* llama-gpt: A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device. New: Code Llama support!
* code-llama-for-vscode: Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.
* twinny: The most no-nonsense, locally or API-hosted AI code completion plugin for Visual Studio Code - like GitHub Copilot but completely free and 100% private.
Wonder if you can pair it with https://github.com/getumbrel/llama-gpt
Continue has a great guide on using the new Code Llama model launched by Facebook last week: https://continue.dev/docs/walkthroughs/codellama
Continue also works with various backends and fine-tuned versions of Code Llama. For example, for a local experience with GPU acceleration on macOS, Continue can be used with Ollama (https://github.com/jmorganca/ollama):
ollama pull codellama
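Once the model is pulled, Ollama serves it over a local REST API (http://localhost:11434 by default) whose /api/generate endpoint streams newline-delimited JSON. A minimal sketch of collecting a completion from that stream, assuming the response shape documented in Ollama's API reference (the `chunks` sample below is illustrative, not captured output):

```python
import json

def collect_response(ndjson_lines):
    """Join the 'response' fragments from Ollama's streaming
    /api/generate output (one JSON object per line); stops at the
    final object where 'done' is true."""
    parts = []
    for line in ndjson_lines:
        obj = json.loads(line)
        parts.append(obj.get("response", ""))
        if obj.get("done"):
            break
    return "".join(parts)

# Example with the kind of chunks the server streams back:
chunks = [
    '{"model":"codellama","response":"def ","done":false}',
    '{"model":"codellama","response":"foo():","done":false}',
    '{"model":"codellama","response":"","done":true}',
]
print(collect_response(chunks))  # def foo():
```

In practice you would read these lines from the HTTP response body; Continue and other clients handle this for you.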
You can absolutely run LLMs without a GPU, but you need to set expectations for performance. Some projects to look into are:
* llama.cpp - https://github.com/ggerganov/llama.cpp
* koboldcpp - https://github.com/LostRuins/koboldcpp
Ollama currently only works on macOS. Here is a portable option:
https://github.com/xnul/code-llama-for-vscode