text-generation-inference vs refact

| | text-generation-inference | refact |
| --- | --- | --- |
| Mentions | 29 | 34 |
| Stars | 7,881 | 1,422 |
| Growth | 6.2% | 2.6% |
| Activity | 9.6 | 9.8 |
| Latest commit | 5 days ago | about 22 hours ago |
| Language | Python | JavaScript |
| License | Apache License 2.0 | BSD 3-Clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
text-generation-inference
- FLaNK AI - April 22, 2024
- Zephyr 141B, a Mixtral 8x22B fine-tune, is now available in Hugging Chat

  I wanted to write that the TGI inference engine is no longer open source, but they have reverted the license back to Apache 2.0 for the new version, TGI v2.0: https://github.com/huggingface/text-generation-inference/rel...

  Good news!
- Hugging Face reverts the license back to Apache 2.0
- HuggingFace text-generation-inference is reverting to Apache 2.0 License
- FLaNK Stack 05 Feb 2024
- Is there any open source app to load a model and expose API like OpenAI?
- AI Code assistant for about 50-70 users

  Setting up a server for multiple users is very different from setting up an LLM for yourself. A safe bet would be to just use TGI, which supports continuous batching and is very easy to run via Docker on your server: https://github.com/huggingface/text-generation-inference
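To make the "run via Docker" suggestion concrete, here is a minimal sketch of calling a TGI container from Python. The Docker invocation in the comment follows TGI's README, but the model ID, port mapping, and generation parameters are illustrative assumptions:

```python
# Minimal sketch: query a running TGI container over its REST API.
# Assumes the server was started roughly like this (model ID is illustrative):
#   docker run --gpus all --shm-size 1g -p 8080:80 \
#       ghcr.io/huggingface/text-generation-inference:latest \
#       --model-id mistralai/Mistral-7B-Instruct-v0.2
import requests

resp = requests.post(
    "http://localhost:8080/generate",  # TGI's synchronous generation endpoint
    json={
        "inputs": "Explain continuous batching in one sentence.",
        "parameters": {"max_new_tokens": 64, "temperature": 0.7},
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```

Continuous batching is what makes one server workable for 50-70 users: incoming requests are merged into the running batch token by token instead of queuing behind each other.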
- LocalPilot: Open-source GitHub Copilot on your MacBook

  Okay, I actually got local copilot set up. You will need these 4 things.

  1) CodeLlama 13B or another FIM model: https://huggingface.co/codellama/CodeLlama-13b-hf. You want "Fill in the Middle" models because you're looking at context on both sides of your cursor (see the sketch after this list).
  2) HuggingFace llm-ls: https://github.com/huggingface/llm-ls. A large language model Language Server (is this making sense yet?).
  3) HuggingFace inference framework: https://github.com/huggingface/text-generation-inference. At least when I tested, you couldn't use something like llama.cpp or exllama with llm-ls, so you need to break out the heavy-duty HuggingFace inference server. Just config and run. Now config and run llm-ls.
  4) Okay, you also need an editor. I just tried nvim, and this was a few weeks ago, so there may be better support now. My experience was that it was full honest-to-god Copilot. The CodeLlama models are known to be quite good for their size. The FIM part is great: boilerplate works so much easier with the surrounding context. I'd like to see more models released that can work this way.
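To show what "Fill in the Middle" buys you, here is a rough sketch of the infill prompt format documented for CodeLlama (the `<PRE>`/`<SUF>`/`<MID>` sentinels), sent to the same kind of TGI endpoint as above; the file contents, URL, and parameters are illustrative assumptions:

```python
# Sketch of a Fill-in-the-Middle (FIM) completion request, assuming a TGI
# server hosting codellama/CodeLlama-13b-hf is listening on localhost:8080.
import requests

# Code on both sides of the cursor in the file being edited.
prefix = "def fibonacci(n: int) -> int:\n    "
suffix = "\n    return a\n"

# CodeLlama's infill format: the model generates the text that belongs
# between the prefix and the suffix, i.e. at the cursor position.
prompt = f"<PRE> {prefix} <SUF>{suffix} <MID>"

resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": prompt,
        # <EOT> marks the end of the infilled span in CodeLlama's output.
        "parameters": {"max_new_tokens": 48, "stop": ["<EOT>"]},
    },
    timeout=60,
)
print(resp.json()["generated_text"])  # the suggested middle
```

This is, roughly, the kind of request llm-ls assembles from your editor's buffer on each completion.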
- Mistral 7B Paper on ArXiv

  A simple microservice would be https://github.com/huggingface/text-generation-inference. Works flawlessly in Docker on my Windows machine, which is extremely shocking.
- best way to serve llama V2 (llama.cpp vs Triton vs HF text generation inference)

  I am wondering what is the best / most cost-efficient way to serve Llama V2: llama.cpp (is it production-ready or just for playing around?), Triton Inference Server, or HF text-generation-inference?
refact
- RefactAI: Use best-in-class LLMs for coding in your IDE
- Supercharge Your Dev Workflow: How Refact's AI-powered Code Completion Boosts Developer Productivity

  With over 1.3k stars on GitHub, more than 40k downloads and installs across VS Code and JetBrains IDEs, and more than 50 positive reviews, Refact is among the best products in the AI coding assistant market.
- What do you use to run your models?

  On VS Code I sometimes use continue.dev and refact.ai just for fun, and they are great!
- AI Code assistant for about 50-70 users

  Refact was made for this: https://github.com/smallcloudai/refact
- Free WebUI for Fine-Tuning and Self-Hosting Open-Source LLMs for Coding
- LocalPilot: Open-source GitHub Copilot on your MacBook

  You should check out [refact.ai](https://github.com/smallcloudai/refact). It has both autocomplete and chat. It's in active development, with lots of new features coming soon (context search, fine-tuning for larger models, etc.).
- Replit's new AI Model now available on Hugging Face

  I don't recommend that, since it uses the cloud for the actual inference by default (and they provide no guidance for changing that). I don't consider cloud inference to count as getting it working "locally" as requested by the comment above yours.

  Refact works nicely and works locally, but the challenge with any new model is getting it supported by the existing software: https://github.com/smallcloudai/refact/
- Refact.ai 1.0.0 Released
- 📝 🚀 Creating our first documentation from scratch using Astro and Refact AI coding assistant

  Previously, we used Astro for our refact.ai website and wanted to stay within the Astro ecosystem for the documentation.
- 🤖 We trained a small 1.6b code model and you can use it as a personal copilot in Refact for free 🤖

  Refact LLM can be easily integrated into existing developer workflows with an open-source Docker container and VS Code and JetBrains plugins. With Refact's intuitive user interface, developers can use the model for a variety of coding tasks. Fine-tuning is available in the self-hosting (Docker) and Enterprise versions, making suggestions more relevant to your private codebase.
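For reference, a rough sketch of the Docker-based self-hosting route described above. The image name, port, and volume are assumptions based on the Refact README at the time of writing; check https://github.com/smallcloudai/refact for current instructions:

```python
# Sketch: start the Refact self-hosting container, then wait for its web UI
# before pointing the VS Code/JetBrains plugin at the server. The image name,
# port, and volume are assumptions from the project's README; verify first.
import subprocess
import time

import requests

subprocess.run(
    [
        "docker", "run", "-d", "--rm", "--gpus", "all",
        "-p", "8008:8008",
        "-v", "refact-perm-storage:/perm_storage",
        "smallcloud/refact_self_hosting:latest",
    ],
    check=True,
)

# Poll until the admin web UI (model selection, fine-tuning) responds.
for _ in range(60):
    try:
        if requests.get("http://localhost:8008", timeout=2).ok:
            print("Refact server is up at http://localhost:8008")
            break
    except requests.exceptions.ConnectionError:
        time.sleep(5)
```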
What are some alternatives?
llama-cpp-python - Python bindings for llama.cpp
tabby - Self-hosted AI coding assistant
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
fauxpilot - FauxPilot - an open-source alternative to GitHub Copilot server
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
developer - the first library to let you embed a developer agent in your own app!
vllm - A high-throughput and memory-efficient inference and serving engine for LLMs
supervision - We write your reusable computer vision tools. 💜