tabby vs FLiPStackWeekly

| | tabby | FLiPStackWeekly |
|---|---|---|
| Mentions | 24 | 80 |
| Stars | 17,315 | 14 |
| Growth | 6.2% | - |
| Activity | 9.9 | 9.9 |
| Latest commit | about 13 hours ago | 1 day ago |
| Language | Rust | |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tabby
- Google CodeGemma: Open Code Models Based on Gemma [pdf]
- What AI assistants are already bundled for Linux?
NixOS just got tabbyml[1], which is built on llama-cpp. I'm working on systemd services this weekend and updating to the latest tabbyml release, which supports ROCm in addition to CUDA.
[1] https://github.com/TabbyML/tabby
[2] https://github.com/NixOS/nixpkgs/pull/291744
- FLaNK Stack Weekly 19 Feb 2024
- Show HN: Tabby back end in 20 Python lines (self-hosted AI coding assistant)
Nice implementation! It should serve as a great reference for a minimal Tabby backend API. Thank you for sharing it!
Yeah - ultimately, it won't be as performant or feature-rich as https://github.com/TabbyML/tabby, but it's still perfect for educational purposes!
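For readers curious what a backend that small looks like, here is a hypothetical sketch (not the linked project's code) of a Tabby-style completion server in Python with FastAPI. It assumes the /v1/completions request/response shape Tabby clients use (prefix/suffix segments in, a list of choices out); the exact schema may vary between Tabby versions, and `generate()` is a stand-in for whatever local model you would actually call.

```python
# Minimal sketch of a Tabby-style completion backend (not the actual Tabby server).
# Run with: uvicorn sketch:app --port 8080
# Assumes FastAPI/uvicorn are installed; the request/response shape below is an
# assumption about Tabby's /v1/completions endpoint and may differ between versions.
import uuid
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Segments(BaseModel):
    prefix: str
    suffix: Optional[str] = None


class CompletionRequest(BaseModel):
    language: Optional[str] = None
    segments: Segments


def generate(prefix: str, suffix: Optional[str]) -> str:
    # Placeholder: a real backend would call a local code model here
    # (e.g. llama.cpp bindings or an OpenAI-compatible server).
    return "pass  # TODO: replace with model output"


@app.post("/v1/completions")
def completions(req: CompletionRequest):
    # Return one completion choice for the given prefix/suffix context.
    text = generate(req.segments.prefix, req.segments.suffix)
    return {"id": f"cmpl-{uuid.uuid4()}", "choices": [{"index": 0, "text": text}]}


@app.get("/v1/health")
def health():
    # Tabby exposes a health endpoint; a stub response is enough for experiments.
    return {}
```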
- Stable Code 3B: Coding on the Edge
- Show HN: I built local copilot alternative using Codellama
Looks interesting! What are the main differences between this and https://github.com/TabbyML/tabby ?
- Ask HN: Who is hiring? (October 2023)
TabbyML | Software Engineer (Rust) | REMOTE
Self-hosted AI coding assistant. An open-source / on-prem alternative to GitHub Copilot.
Project: https://github.com/TabbyML/tabby
Tabby is seeking a Software Engineer proficient in Rust to join our core engineering team. In this role, you will be responsible for developing the following features:
- Show HN: Tabby – AI Coding Assistant Runs on Apple M1/M2 GPU
- Meta: Code Llama, an AI Tool for Coding
There are a bunch of VSCode extensions that make use of local models. Tabby seems to be the most friendly right now, but I admittedly haven't tried it myself: https://tabbyml.github.io/tabby/
- CodeCompose: Meta’s AI Coding Assistant
Check out https://github.com/TabbyML/tabby, which is fully self-hostable and comes with niche features. On M1/M2, it offers a convenient single binary deployment, thanks to Rust. You can find the latest release at https://github.com/TabbyML/tabby/releases/tag/latest.
(Disclaimer: I am the author)
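If you want to poke at a self-hosted instance after grabbing a release binary, a quick smoke test from Python looks roughly like the sketch below. The host, port (8080 is a common default), and payload shape are assumptions to adapt to your setup rather than a documented contract.

```python
# Hypothetical client-side check against a locally running Tabby server.
# Endpoint path and payload follow the same assumed /v1/completions shape as above;
# adjust host/port if your instance differs.
import json
import urllib.request

payload = {
    "language": "python",
    "segments": {"prefix": "def fibonacci(n):\n    ", "suffix": "\n"},
}

req = urllib.request.Request(
    "http://localhost:8080/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# Print the suggested completion text, if any.
for choice in body.get("choices", []):
    print(choice.get("text", ""))
```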
FLiPStackWeekly
What are some alternatives?
fauxpilot - An open-source alternative to GitHub Copilot server
gorilla-cli - LLMs for your CLI
turbopilot - An open-source large-language-model based code completion engine that runs locally on CPU
awk-raycaster - Pseudo-3D shooter written completely in gawk using raycasting technique
refact - WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding
litellm - Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
modelscope - Bring the notion of Model-as-a-Service to life.
aider - AI pair programming in your terminal
create-nifi-pulsar-flink-apps - How to create a real-time scalable streaming app using Apache NiFi, Apache Pulsar and Apache Flink SQL
ollama-ui - Simple HTML UI for Ollama
FLiP-PulsarSummit2022Asia - Pulsar Summit Asia 2022