tabby vs codellama
| | tabby | codellama |
|---|---|---|
| Mentions | 24 | 9 |
| Stars | 17,192 | 14,965 |
| Stars growth (month over month) | 6.2% | 8.2% |
| Activity | 9.9 | 5.5 |
| Latest commit | 4 days ago | 25 days ago |
| Language | Rust | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tabby
- Google CodeGemma: Open Code Models Based on Gemma [pdf]
- What AI assistants are already bundled for Linux?
NixOS just got tabbyml [1], which is built on llama-cpp. I'm working on systemd services this weekend and updating to the latest tabbyml release, which supports ROCm in addition to CUDA.
[1] https://github.com/TabbyML/tabby
[2] https://github.com/NixOS/nixpkgs/pull/291744
- FLaNK Stack Weekly 19 Feb 2024
- Show HN: Tabby back end in 20 Python lines (self-hosted AI coding assistant)
Nice implementation! It should serve as a great reference for a minimal Tabby backend API. Thank you for sharing it!
Yeah - ultimately, it won't be as performant or feature-rich as https://github.com/TabbyML/tabby, but it's still perfect for educational purposes!
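For context, a minimal backend like the one discussed only needs to accept a completion request and return generated text. Below is a stdlib-only sketch of that request/response handling. The field names (`segments`/`prefix`/`choices`) are assumptions based on Tabby's public `/v1/completions` endpoint, and the "model" is a stub where a real server would call an LLM:

```python
import json
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

def complete(prefix: str, suffix: str = "") -> str:
    # Placeholder "model": a real backend would invoke an LLM here
    # (e.g. llama.cpp or a transformers pipeline) with the prefix/suffix.
    return "    pass  # generated code would go here"

def handle_completion(request: dict) -> dict:
    # Response shape mirrors Tabby's /v1/completions endpoint:
    # an id plus a list of choices. Field names are assumed, not a spec.
    segments = request.get("segments", {})
    text = complete(segments.get("prefix", ""), segments.get("suffix", ""))
    return {"id": f"cmpl-{uuid.uuid4()}",
            "choices": [{"index": 0, "text": text}]}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/completions":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handle_completion(request)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```

Swapping the stub for a real model call is essentially all the "20 Python lines" version adds on top of this skeleton.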
- Stable Code 3B: Coding on the Edge
- Show HN: I built local copilot alternative using Codellama
Looks interesting! What are the main differences between this and https://github.com/TabbyML/tabby ?
- Ask HN: Who is hiring? (October 2023)
TabbyML | Software Engineer (Rust) | REMOTE
A self-hosted AI coding assistant: an open-source / on-prem alternative to GitHub Copilot.
Project: https://github.com/TabbyML/tabby
Tabby is seeking a Software Engineer proficient in Rust to join our core engineering team. In this role, you will be responsible for developing the following features:
- Show HN: Tabby – AI Coding Assistant Runs on Apple M1/M2 GPU
- Meta: Code Llama, an AI Tool for Coding
There are a bunch of VSCode extensions that make use of local models. Tabby seems to be the most friendly right now, but I admittedly haven't tried it myself: https://tabbyml.github.io/tabby/
- CodeCompose: Meta’s AI Coding Assistant
Check out https://github.com/TabbyML/tabby, which is fully self-hostable and comes with niche features. On M1/M2, it offers a convenient single binary deployment, thanks to Rust. You can find the latest release at https://github.com/TabbyML/tabby/releases/tag/latest.
(Disclaimer: I am the author)
codellama
- Meta AI releases Code Llama 70B
The GitHub repo [0] hasn't been fully updated, but it links to a paper [1] that describes how the smaller Code Llama models were trained. It's a good guess that this model is similar.
[0] https://github.com/facebookresearch/codellama
- Open/Local LLM support for MineDojo/Voyager
This k8s application deploys an instance of Voyager along with a Fabric Minecraft server and the required Fabric mods. It assumes you have a local deployment of a Large Language Model (LLM) with a 4K-8K token context length, exposing an OpenAI-compatible API that includes embeddings support.
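For clarity, "OpenAI-compatible API" means the local server accepts the same JSON request shapes as OpenAI's endpoints, so existing clients only need their base URL pointed at it. A stdlib-only sketch of building such a request; the base URL, model name, and prompt are placeholders for whatever local server (e.g. one backed by llama.cpp) you run:

```python
import json
import urllib.request

def openai_chat_request(base_url: str, model: str, messages: list,
                        api_key: str = "not-needed") -> urllib.request.Request:
    # Builds a POST against an OpenAI-compatible /v1/chat/completions
    # endpoint. Local servers usually ignore the API key but expect the
    # header to be present. The same pattern applies to /v1/embeddings.
    payload = {"model": model, "messages": messages, "max_tokens": 256}
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )

req = openai_chat_request("http://localhost:8000", "local-model",
                          [{"role": "user", "content": "craft a wooden pickaxe"}])
# urllib.request.urlopen(req) would send it once the local server is up.
```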
- Code Llama Parameters
I have been playing with Code Llama (the 7B Python one). It does pretty well, but I don't understand what the parameters in the code mean or how I should adjust them to work best on my hardware. I'm looking at the code in: https://github.com/facebookresearch/codellama/blob/main/llama/generation.py.
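For reference, the knobs in that file are standard decoding parameters: `temperature` rescales the logits (lower is more deterministic), `top_p` enables nucleus sampling, and `max_seq_len` / `max_batch_size` mainly bound memory use, so those are the ones to tune for your hardware. A pure-Python illustration of what `temperature` and `top_p` do during decoding, using a toy token-to-logit dict rather than the actual generation.py code:

```python
import math
import random

def sample_next_token(logits: dict, temperature: float = 0.6,
                      top_p: float = 0.9, rng=random) -> str:
    # logits: toy mapping of token -> raw score.
    if temperature <= 0:
        # Greedy decoding: always pick the highest-scoring token.
        return max(logits, key=logits.get)
    # Softmax with temperature: dividing by a smaller temperature
    # sharpens the distribution toward the top token.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    m = max(scaled.values())
    exp = {tok: math.exp(v - m) for tok, v in scaled.items()}
    z = sum(exp.values())
    probs = sorted(((tok, e / z) for tok, e in exp.items()),
                   key=lambda kv: -kv[1])
    # Nucleus (top-p) sampling: keep the smallest set of tokens whose
    # cumulative probability reaches top_p, then sample from that set.
    kept, cum = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    total = sum(p for _, p in kept)
    r = rng.random() * total
    for tok, p in kept:
        r -= p
        if r <= 0:
            return tok
    return kept[-1][0]
```

With a small `top_p` or `temperature`, sampling collapses toward the single most likely token; raising either makes completions more varied.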
- What frameworks or platforms to use for full fine tuning of Code Llama?
Should I use HuggingFace https://huggingface.co/codellama/CodeLlama-34b-hf or grab the model from Facebook https://github.com/facebookresearch/codellama?
- Code Llama Released
- Meta just released its answer to GitHub Copilot, and it’s free
[…] such rights.
https://github.com/facebookresearch/codellama/blob/main/LICE...
https://github.com/facebookresearch/llama/blob/main/LICENSE
- Introducing Code Llama: A New Era of AI-Driven Coding
Bringing AI to the coding community: Code Llama is designed to support software engineers across sectors, including research, industry, and open-source projects. You can check out the GitHub repo here.
- Code Llama by MetaAI (released yesterday)
GitHub: https://github.com/facebookresearch/codellama
- Meta: Code Llama, an AI Tool for Coding
What are some alternatives?
fauxpilot - an open-source alternative to the GitHub Copilot server
refact - WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding
turbopilot - an open-source, large-language-model-based code completion engine that runs locally on CPU
llama.cpp - LLM inference in C/C++
lmdeploy - LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
smartcat
aider - aider is AI pair programming in your terminal
Voyager - An Open-Ended Embodied Agent with Large Language Models
ollama-ui - Simple HTML UI for Ollama
app-voyager - Kubernetes deployment for Voyager and Fabric Minecraft