| | uniteai | accelerate |
|---|---|---|
| Mentions | 17 | 18 |
| Stars | 228 | 7,225 |
| Growth | - | 3.8% |
| Activity | 8.2 | 9.7 |
| Latest commit | 5 months ago | 3 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
uniteai
-
Can we discuss MLOps, Deployment, Optimizations, and Speed?
I recently went through the same with UniteAI, and had to swap ctransformers back out for llama.cpp
-
Best Local LLM Backend Server Library?
I maintain the uniteai project, and have implemented a custom backend for serving transformers-compatible LLMs. (That file's actually a great ultra-lightweight server if transformers satisfies your needs; one clean file.)
-
Show HN: SeaGOAT – local, “AI-based” grep for semantic code search
UniteAI brings together speech recognition and document / code search. The major difference is your UI is your preferred text editor.
https://github.com/freckletonj/uniteai
-
Language Model UXes in 2027
In answer to the same question I built UniteAI https://github.com/freckletonj/uniteai
It's local first, and ties many different AIs into one text editor, any arbitrary text editor in fact.
It does speech recognition, which isn't useful for writing code, but is useful for generating natural language LLM prompts and comments.
It does CodeLlama (and any HuggingFace-based language model)
It does ChatGPT
It does Retrieval Augmented Gen, which is where you have a query that searches through eg PDFs, Youtube transcripts, code bases, HTML, local or online files, Arxiv papers, etc. It then surfaces passages relevant to your query, that you can then further use in conjunction with an LLM.
I don't know how mainstream LLM-powered software looks, but for devs, I love this format of tying in the best models as they're released into one central repo where they can all play off each others' strengths.
-
Can I get a pointer on Kate LSP Clients? I'm trying to add a brand new one.
I'm working on UniteAI, a project to tie different AI capabilities into the editor, and it has a clean LSP Server.
-
UniteAI, collab with AIs in your text editor by writing alongside each other
*TL;DR*: chat with AI, code with AI, speak to AI (voice-to-text + vice versa), have AI search huge corpora or websites for you, all via an interface of collaborating on a text doc together in the editor you use now.
*Motivation*
I find the last year of AI incredibly heartening. Researchers are still regularly releasing SoTA models in disparate domains. Meta is releasing the powerful Llama models under generous provisions (as is the UAE with Falcon?!). And the open source community has shown a tidal wave of interest and effort in building things out of these tools (112k repos on GH mentioning ML!).
Facing this deluge of valuable things that communities are shepherding into the world, I wanted to incorporate them into my workflows, which as a software engineer, means my text editor.
*UniteAI*
So I started *UniteAI* https://github.com/freckletonj/uniteai, an Apache-2.0 licensed tool.
Check out the screencasts: https://github.com/freckletonj/uniteai#some-core-features
This project:
* Ties into *any editor* via the Language Server Protocol. Like collaborating in G-Docs, you collab with whatever AI directly in the document, all of you writing alongside each other concurrently.
* Like Copilot / Cursor, it can write code/text right in your doc.
* It supports *any Locally runnable model* (Llama family, Falcon, Finetunes, the 21k available models on HF, etc.)
* It supports *OpenAI/ChatGPT* via API key.
* *Speech-to-Text*, useful for writing prompts to your LLM
* You can do *Semantic Search* (Retrieval Augmented Generation) on many sources: local files, Arxiv, youtube transcripts, Project Gutenberg books, any online HTML, basically if you give it a URI, it can probably use it.
* You can trigger features easily via [key combos](https://github.com/freckletonj/uniteai#keycombos).
* Written in Python, so it's much more generic than a bespoke `some_specific_editor` plugin.
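To make the Semantic Search bullet concrete, here is a toy sketch of the retrieval half of Retrieval Augmented Generation. This is not UniteAI's actual implementation — real systems use learned sentence embeddings — and the bag-of-words vectors are a stand-in chosen only to keep the example self-contained:

```python
# Toy sketch of RAG retrieval: embed passages and a query, rank by
# cosine similarity, and surface the top matches for the LLM to use.
import math
from collections import Counter

def embed(text):
    """Bag-of-words 'embedding': token -> count (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, passages, k=1):
    """Return the k passages most similar to the query."""
    q = embed(query)
    ranked = sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]

passages = [
    "The building code requires smoke detectors in every bedroom.",
    "Llama models can be fine-tuned on consumer hardware.",
    "Project Gutenberg hosts public-domain books.",
]
print(retrieve("smoke detector requirements", passages))
```

The retrieved passages would then be spliced into the LLM prompt as context, which is the "use in conjunction with an LLM" step described above.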
*Caveat*
Since it always comes up, *AI is not perfect*. AI is a tool to augment your time, not replace it. It hallucinates, it lies, it bullshits, it writes bad code, it gives dangerous advice.
But it can still do many useful things, and for me it is a *huge force multiplier.*
*You need a Human In The Loop*, which is why it's nice to work together iteratively on a text document, as this project has you do. You keep it on track.
*Why is this interesting*
These tools play well when used together:
* *Code example:* I can Voice-to-Text a function comment then send that to an LLM to write the function.
* *Code example 2:* I can chit chat about project architecture plans, and strategies, and libraries I should consider.
* *Documentation example:* I can retrieve relevant sections of my city's building code with a natural language query, then send that to an LLM to expound upon.
* *Authorship example*: I can have my story arcs and character dossiers in some markdown file, and use that guidance to contextualize an AI as it works with me for writing a story.
* *Entertainment example*: I told my AI it was a Dungeon Master, then over breakfast with friends, used Voice-to-Text and Text-to-Wizened-Wizard-Voice, and played a hilarious game. I still had to drive all this via a text doc and handy key combos.
*RFC*
Installation instructions are on the repo: https://github.com/freckletonj/uniteai#quickstart-installing...
This is still nascent, and I welcome all feedback, positive or critical.
We have a community linked on the repo which you're invited to join.
I'd love to chat with people who like this idea, use it, want to see other features, want to contribute their effort, want to file bug reports, etc.
A big part of my motivation in this is to socialize with like-minds, and build something cool.
*Thanks for checking this out!*
- UniteAI: In an editor, self hosted llama, code llama, mic voice transcription, and ai-powered web/document search
- [ UniteAI ]: "your AIs in your editor". I've been bustin' my butt, and feel like it's finally worth presenting to the world.
-
Show HN: Use Code Llama as Drop-In Replacement for Copilot Chat
[UniteAI](https://github.com/freckletonj/uniteai) I think fits the bill for you.
This is my project, where the goal is to Unite your AI-stack inside your editor (so, Speech-to-text, Local LLMs, Chat GPT, Retrieval Augmented Gen, etc).
It's built atop a Language Server, so, while no one has made an IntelliJ client yet, it's simple to do. I'll help you do it if you make a GH Issue!
-
UniteAI: Your AI-Stack in your Editor
UniteAI (github)
accelerate
-
Can we discuss MLOps, Deployment, Optimizations, and Speed?
accelerate is a best-in-class lib for deploying models, especially across multi-gpu and multi-node.
-
Code Llama - The Hugging Face Edition
In the coming days, we'll work on sharing scripts to train models, optimizations for on-device inference, even nicer demos (and for more powerful models), and more. Feel free to like our GitHub repos (transformers, peft, accelerate). Enjoy!
-
What are the current fastest multi-gpu inference frameworks?
So I rented a cloud server today to try out some of the recent LLMs like Falcon and Vicuna. I started with huggingface's generate API using accelerate. It got about 2 instances/s with 8 A100 40GB GPUs, which I think is a bit slow. I was using batch size = 1 since I don't know how to do multi-batch inference with the .generate API. I already did torch.compile + bf16. Is there an even faster multi-GPU inference framework? I have 8 GPUs, so I was hoping for MUCH faster speeds, like ~10 or 20 instances per second (or is that even possible? I'm pretty new to this field).
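On the multi-batch question: `model.generate` does accept batched input if you tokenize a list of prompts with padding enabled. A hedged sketch of the batching loop — the `chunked` helper is plain Python, and the commented lines show where the tokenizer/generate calls would go, assuming a standard `transformers` causal-LM setup with `tokenizer.pad_token` set:

```python
from itertools import islice

def chunked(items, batch_size):
    """Split a list of prompts into batches of at most batch_size."""
    it = iter(items)
    while batch := list(islice(it, batch_size)):
        yield batch

prompts = [f"Question {i}: ..." for i in range(10)]

for batch in chunked(prompts, batch_size=4):
    # With a real model (assumes tokenizer.pad_token is set):
    #   inputs = tokenizer(batch, return_tensors="pt", padding=True).to(model.device)
    #   out = model.generate(**inputs, max_new_tokens=128)
    #   texts = tokenizer.batch_decode(out, skip_special_tokens=True)
    print(len(batch))  # 4, 4, 2
```

Batching like this usually recovers most of the lost throughput before reaching for a different serving framework.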
-
Looking at lefnire's suggestion of splitting huggingface batches per gradient_accumulation_steps
Looking through https://github.com/huggingface/accelerate/tree/main/src/accelerate/utils/ I think it might be feasible, but will require some modifications to:
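lefnire's suggestion amounts to slicing each batch into `gradient_accumulation_steps` micro-batches and stepping the optimizer once per full batch. A minimal pure-Python sketch of that split — the `micro_batches` helper is hypothetical, not code from `accelerate` (the library's own route is `Accelerator(gradient_accumulation_steps=N)` plus the `accelerator.accumulate(model)` context manager):

```python
def micro_batches(batch, accumulation_steps):
    """Split one batch into `accumulation_steps` micro-batches.

    Gradients accumulate over the micro-batches and the optimizer steps
    once per full batch, so the effective batch size is unchanged while
    peak memory drops by roughly a factor of accumulation_steps.
    """
    size = -(-len(batch) // accumulation_steps)  # ceiling division
    for i in range(0, len(batch), size):
        yield batch[i:i + size]

batch = list(range(8))
for mb in micro_batches(batch, accumulation_steps=4):
    # loss = model(mb).loss / 4   # scale so accumulated grads match the full batch
    # loss.backward()             # grads accumulate across micro-batches
    print(mb)
# optimizer.step(); optimizer.zero_grad()  # once per full batch
```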
-
Have to abandon my (almost) finished LLaMA-API-Inference server. If anybody finds it useful and wants to continue, the repo is yours. :)
As /u/RabbitHole32 already mentioned, the speed increase stems from a patch which modifies how a certain large tensor is distributed between the GPUs. The patch was created by /u/emvw7yf. Here you can find the respective GitHub issue: https://github.com/huggingface/accelerate/issues/1394
-
Help please! SD installation broken
pip install git+https://github.com/huggingface/accelerate
-
Batch Controlnet
pip install controlnet_aux
pip install diffusers transformers git+https://github.com/huggingface/accelerate.git
-
[D] Large Language Models feasible to run on 32GB RAM / 8 GB VRAM / 24GB VRAM
Try to use both GPUs with this one: https://github.com/huggingface/accelerate https://huggingface.co/docs/accelerate/usage_guides/big_modeling https://huggingface.co/blog/accelerate-large-models Maybe it will help (the last link is clearer IMHO).
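The big-model route those links describe is to load with a device map so layers are sharded across both GPUs, with overflow spilling to CPU RAM. A hedged sketch — the `plan_max_memory` helper is hypothetical, though `device_map="auto"` and `max_memory` are real `transformers`/`accelerate` parameters:

```python
def plan_max_memory(n_gpus, gpu_gib, cpu_gib):
    """Build the max_memory dict accelerate uses to cap per-device usage.

    Keys 0..n_gpus-1 are CUDA devices; the "cpu" entry lets layers that
    don't fit on the GPUs spill over into system RAM.
    """
    budget = {i: f"{gpu_gib}GiB" for i in range(n_gpus)}
    budget["cpu"] = f"{cpu_gib}GiB"
    return budget

# e.g. an 8 GiB card kept a little under its limit, plus 30 GiB of RAM:
max_memory = plan_max_memory(n_gpus=2, gpu_gib=7, cpu_gib=30)
print(max_memory)

# With the real libraries this plugs in as:
#   from transformers import AutoModelForCausalLM
#   model = AutoModelForCausalLM.from_pretrained(
#       "some/model", device_map="auto", max_memory=max_memory)
```

Leaving a GiB or so of headroom per GPU avoids out-of-memory errors from activation buffers during generation.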
-
Fine Tuning Stable Diffusion with Dreambooth from Within My Python Code
I read through this page on accelerate, but it's not clear to me how the arguments such as instance_prompt gets passed in.
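On how arguments like `instance_prompt` get passed in: `accelerate launch` just runs the training script, so script flags are ordinary argv entries appended after the script path. A sketch with a hypothetical `build_launch_cmd` helper (the script name and flag values below are illustrative, taken from the diffusers Dreambooth example's conventions):

```python
import subprocess

def build_launch_cmd(script, **script_args):
    """Build an `accelerate launch` command; script flags follow the script path."""
    cmd = ["accelerate", "launch", script]
    for key, value in script_args.items():
        cmd.append(f"--{key}={value}")
    return cmd

cmd = build_launch_cmd(
    "train_dreambooth.py",                 # training script from the diffusers examples
    instance_prompt="a photo of sks dog",  # passed straight through as argv
    resolution=512,
)
# subprocess.run(cmd, check=True)  # uncomment to actually launch training
```

From within Python code, calling this via `subprocess` keeps accelerate's process launching intact rather than importing the training script directly.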
-
What does ACCELERATE do in AUTOMATIC1111?
To activate it, uncomment line 44 in webui-user.sh, or add set ACCELERATE="True" to webui-user.bat. It seems to use huggingface/accelerate (which draws on Microsoft DeepSpeed and the ZeRO paper).
What are some alternatives?
unsloth - Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
continue - ⏩ Continue enables you to create your own AI code assistant inside your IDE. Keep your developers in flow with open-source VS Code and JetBrains extensions
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
chatcraft.org - Developer-oriented ChatGPT clone
FlexGen - Running large language models like OPT-175B/GPT-3 on a single GPU. Focusing on high-throughput generation. [Moved to: https://github.com/FMInference/FlexGen]
semantic-code-search - Search your codebase with natural language • CLI • No data leaves your computer
horovod - Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
SeaGOAT - local-first semantic code search engine
ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
gw2combat - A GW2 combat simulator using entity-component-system design