tabby vs dstack

| | tabby | dstack |
|---|---|---|
| Mentions | 26 | 17 |
| Stars | 17,534 | 1,110 |
| Growth | 4.5% | 5.1% |
| Activity | 9.9 | 9.8 |
| Latest commit | about 9 hours ago | 7 days ago |
| Language | Rust | Python |
| License | GNU General Public License v3.0 or later | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tabby
- IBM Granite: A Family of Open Foundation Models for Code Intelligence
https://github.com/TabbyML/tabby lets you run a self-hosted AI coding assistant. I tried it a while ago and it worked with Nvim pretty easily. There is a VS Code extension too. The extension will just sort of "read" along with you and provide suggestions from time to time. Whenever a suggestion is good, you can accept it with a keystroke. It's basically autocomplete on steroids.
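To make "autocomplete on steroids" concrete, here is a rough sketch of the HTTP request an editor plugin sends to a locally running Tabby server. The endpoint path and prefix/suffix payload follow Tabby's documented completion API, but the port, language hint, and exact field names should be treated as assumptions and checked against your own server's Swagger UI:

```python
import requests

# Assumes a Tabby server is already running locally (default port 8080).
# The /v1/completions path and payload shape mirror Tabby's documented API;
# verify the exact schema against your server before relying on it.
TABBY_URL = "http://localhost:8080/v1/completions"

payload = {
    "language": "python",                     # language hint for the model
    "segments": {
        "prefix": "def fibonacci(n):\n    ",  # code before the cursor
        "suffix": "",                         # code after the cursor (optional)
    },
}

resp = requests.post(TABBY_URL, json=payload, timeout=30)
resp.raise_for_status()

# Each choice is one suggested completion; the editor plugin renders it as
# ghost text and inserts it when you accept the suggestion.
for choice in resp.json().get("choices", []):
    print(choice.get("text", ""))
```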
- Google CodeGemma: Open Code Models Based on Gemma [pdf]
- What AI assistants are already bundled for Linux?
NixOS just got tabbyml [1], which is built on llama-cpp. Working on systemd services this weekend and updating to the latest tabbyml release, which supports ROCm in addition to CUDA.
[1] https://github.com/TabbyML/tabby
[2] https://github.com/NixOS/nixpkgs/pull/291744
- FLaNK Stack Weekly 19 Feb 2024
- Show HN: Tabby back end in 20 Python lines (self-hosted AI coding assistant)
Nice implementation! It should serve as a great reference for a minimal version of Tabby's backend API. Thank you for sharing it!
Yeah - ultimately, it won't be as performant or feature-rich as https://github.com/TabbyML/tabby, but it's still perfect for educational purposes!
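To give a sense of what a "20 Python lines" backend can look like, here is a minimal sketch: a single FastAPI endpoint that accepts a code prefix and returns one suggestion. This is an illustrative toy, not the project linked in the post; the endpoint path and payload shape mirror Tabby's completion API as above, and the model checkpoint is only an example of a small local code model.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()

# Any small local code model works here; this checkpoint is just an example.
generator = pipeline("text-generation", model="bigcode/tiny_starcoder_py")

class Segments(BaseModel):
    prefix: str
    suffix: str | None = None

class CompletionRequest(BaseModel):
    language: str | None = None
    segments: Segments

@app.post("/v1/completions")
def completions(req: CompletionRequest):
    # Generate a short continuation of the prefix, then strip the prompt off.
    out = generator(req.segments.prefix, max_new_tokens=64, do_sample=False)
    text = out[0]["generated_text"][len(req.segments.prefix):]
    return {"choices": [{"index": 0, "text": text}]}

# Run with: uvicorn server:app --port 8080
```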
- Stable Code 3B: Coding on the Edge
- Show HN: I built local copilot alternative using Codellama
Looks interesting! What are the main differences between this and https://github.com/TabbyML/tabby ?
- Ask HN: Who is hiring? (October 2023)
TabbyML | Software Engineer (Rust) | REMOTE
Self-hosted AI coding assistant. An open-source / on-prem alternative to GitHub Copilot.
Project: https://github.com/TabbyML/tabby
Tabby is seeking a Software Engineer proficient in Rust to join our core engineering team. In this role, you will be responsible for developing the following features:
- Show HN: Tabby – AI Coding Assistant Runs on Apple M1/M2 GPU
- Meta: Code Llama, an AI Tool for Coding
There are a bunch of VSCode extensions that make use of local models. Tabby seems to be the most friendly right now, but I admittedly haven't tried it myself: https://tabbyml.github.io/tabby/
dstack
- Pyinfra: Automate Infrastructure Using Python
We're building a similar tool, except we focus on AI workloads. We also now support on-prem clusters in addition to GPU clouds. https://github.com/dstackai/dstack
- Show HN: Open-source alternative to HashiCorp/IBM Vault
Not exactly this, but something related. At https://github.com/dstackai/dstack, we're building an alternative to K8s for AI infra.
- Ask HN: How does deploying a fine-tuned model work
You can use https://github.com/dstackai/dstack to deploy your model to the most affordable GPU clouds. It supports auto-scaling and other features.
Disclaimer: I'm the creator of dstack.
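Whatever tool does the deployment, the end result is usually an HTTP endpoint for the fine-tuned model. Below is a generic client-side sketch of calling such an endpoint with an OpenAI-style payload; the URL, token, and payload fields are placeholders rather than dstack-specific values, and the actual schema depends on the serving framework behind the service.

```python
import requests

# Hypothetical endpoint and token for a deployed fine-tuned model; substitute
# whatever URL and credentials your deployment actually exposes.
ENDPOINT = "https://my-finetuned-model.example.com/v1/chat/completions"
API_TOKEN = "replace-me"

payload = {
    "model": "my-finetuned-model",  # name exposed by the serving framework
    "messages": [
        {"role": "user", "content": "Summarize this ticket in one sentence."}
    ],
    "max_tokens": 128,
}

resp = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```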
- FLaNK Stack Weekly 19 Feb 2024
- Show HN: I Built an Open Source API with Insanely Fast Whisper and Fly GPUs
Great job on the project! It looks fantastic. Thanks to your post, I discovered Fly's GPUs. We are currently developing a platform called https://github.com/dstackai/dstack that enables users to run any model on any cloud. I am curious if it would be possible to add support for Fly.io as well. If you are interested in collaborating on this, please let me know!
- Show HN: Dstack – an open-source engine for running GPU workloads
- [P] I built a tool to compare cloud GPUs. How should I improve it?
I also noticed that the creator of this app, dstack, is affiliated with Tensordock, the top results for most if not all queries. If that's the case, perhaps a direct link to the cheapest machine could be provided? I haven't used Tensordock, so I don't know if this is mechanically possible.
- Running dev environments and ML tasks cost-effectively in any cloud
Here's the repository with all the important links, including documentation, examples, and more: https://github.com/dstackai/dstack
- Dstack Hub
Hey everyone, I'm happy to release dstack Hub, an open-source tool that helps teams manage their ML workflows more effectively without vendor lock-in.
dstack Hub extends dstack [1] with workflow scheduling capabilities and user management. Here's how it works: run dstack Hub via Docker, use its UI to configure projects and cloud credentials, then pass the URL and personal token to the dstack CLI. Now you can run workflows through the CLI, and the Hub will orchestrate them in the cloud on your behalf.
This is a beta release and we plan to continuously improve it. We'd love to hear your feedback and answer any questions!
[1] https://github.com/dstackai/dstack
- Running Stable Diffusion Locally & in Cloud with Diffusers & dstack
To help you overcome this challenge, we have written an article that walks you through using diffusers and dstack to generate images from prompts, both locally and in the cloud, with a simple example.
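As a taste of the local half of that workflow, the snippet below shows the standard diffusers pattern for generating an image from a prompt; with dstack, the same script would simply be the command that a cloud dev environment or task runs. The checkpoint name is just an example from the Hugging Face Hub, and the code assumes an NVIDIA GPU is available.

```python
import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint; any Stable Diffusion model from the Hub can be used.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU is available

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut.png")
```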
What are some alternatives?
fauxpilot - FauxPilot - an open-source alternative to GitHub Copilot server
msdocs-python-django-azure-container-apps - Python web app using Django that can be deployed to Azure Container Apps.
turbopilot - Turbopilot is an open source large-language-model based code completion engine that runs locally on CPU
dstack-examples - A collection of examples demonstrating how to use dstack
refact - WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding
zenml - ZenML: Build portable, production-ready MLOps pipelines. https://zenml.io.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
flyte - Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
aider - aider is AI pair programming in your terminal
lambdapi - Serverless runtime environment tailored for code produced by LLMs. Automatic API generation from your code, support for multiple programming languages, and integrated file and database storage solutions.
ollama-ui - Simple HTML UI for Ollama
metaflow - :rocket: Build and manage real-life ML, AI, and data science projects with ease!