| | unsloth | uniteai |
|---|---|---|
| Mentions | 15 | 17 |
| Stars | 8,974 | 219 |
| Growth | 42.8% | - |
| Activity | 9.4 | 8.2 |
| Latest commit | 3 days ago | 4 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
unsloth
-
Ask HN: Most efficient way to fine-tune an LLM in 2024?
Gemma 7b is 2.4x faster than HF + FA2.
Check out https://github.com/unslothai/unsloth for full benchmarks!
-
Gemma doesn't suck anymore – 8 bug fixes
Here are the missing links:
* Gemma, a family of open models from Google: https://ai.google.dev/gemma
* Unsloth is a tool/method for training models faster (IIUC): https://github.com/unslothai/unsloth
-
AMD ROCm Software Blogs
Thanks! Again, partnerships over customers. If you're experienced and have the technical chops to make a MI300x sing, we want to work with you. Our model is that we are the capex/opex investor for businesses. As much as I love software, Hot Aisle is more of a hardware business. Running super high end large scale compute is an extreme challenge in itself. We are less interested in building the software side of things and want to foster those who can focus on that side.
https://github.com/unslothai/unsloth/issues/160
https://github.com/search?q=repo%3Apredibase%2Florax+rocm&ty...
https://github.com/sgl-project/sglang/issues/157
https://github.com/casper-hansen/AutoAWQ (supports rocm)
-
Show HN: We got fine-tuning Mistral-7B to not suck
Unsloth’s colab notebooks for fine-tuning Mistral-7B are super easy to use and run fine in just about any colab instance:
https://github.com/unslothai/unsloth
It’s my default now for experimenting and basic training. If I want to get into the weeds with the training, I use axolotl, but 9/10, it’s not really necessary.
-
Mistral 7B Fine-Tune Optimized
If anyone wants to finetune their own Mistral 7b model 2.2x faster and use 62% less memory - give our open source package Unsloth a try! https://github.com/unslothai/unsloth :)
-
Has anyone tried out the ASPEN-Framework for LoRA Fine-Tuning yet and can share their experience?
https://github.com/unslothai/unsloth seems good and more relevant to your aims perhaps but I haven't tried it.
-
Can we discuss MLOps, Deployment, Optimizations, and Speed?
The unsloth project offers some low-level optimizations for Llama et al., and as of today some preliminary Mistral support (which I hear uses the Llama architecture?)
- Show HN: 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning
-
80% faster, 50% less memory, 0% accuracy loss Llama finetuning
This seems to just be a link to the Unsloth Github repo[0], which in turn is the free version of Unsloth Pro/Max[1]. Maybe the link should be changed?
[0]: https://github.com/unslothai/unsloth
- 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning
uniteai
-
Can we discuss MLOps, Deployment, Optimizations, and Speed?
I recently went through the same with UniteAI, and had to swap ctransformers back out for llama.cpp
-
Best Local LLM Backend Server Library?
I maintain the uniteai project, and have implemented a custom backend for serving transformers-compatible LLMs. (That file's actually a great ultra-light-weight server if transformers satisfies your needs; one clean file).
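The "one clean file" server idea above can be sketched with nothing but the standard library. This is a hypothetical illustration, not UniteAI's actual file: the `generate` function is a stand-in for a real transformers pipeline call.

```python
# Minimal sketch of a single-file HTTP server wrapping a text-generation
# callback. `generate` is a hypothetical stand-in for something like
# transformers' pipeline("text-generation")(prompt).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate(prompt: str) -> str:
    # Stand-in for a real model call.
    return prompt + " ... [generated]"

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        reply = json.dumps({"text": generate(body["prompt"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):
        # Silence per-request logging.
        pass

def serve(port: int = 0) -> HTTPServer:
    # Port 0 asks the OS for any free port.
    return HTTPServer(("127.0.0.1", port), Handler)
```

Swapping `generate` for an actual transformers pipeline is the whole customization surface, which is what makes this shape so light.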
-
Show HN: SeaGOAT – local, “AI-based” grep for semantic code search
UniteAI brings together speech recognition and document / code search. The major difference is your UI is your preferred text editor.
https://github.com/freckletonj/uniteai
-
Language Model UXes in 2027
In answer to the same question I built UniteAI https://github.com/freckletonj/uniteai
It's local first, and ties many different AIs into one text editor, any arbitrary text editor in fact.
It does speech recognition, which isn't useful for writing code, but is useful for generating natural language LLM prompts and comments.
It does CodeLlama (and any HuggingFace-based language model)
It does ChatGPT
It does Retrieval Augmented Gen, which is where you have a query that searches through eg PDFs, Youtube transcripts, code bases, HTML, local or online files, Arxiv papers, etc. It then surfaces passages relevant to your query, that you can then further use in conjunction with an LLM.
I don't know how mainstream LLM-powered software looks, but for devs, I love this format of tying in the best models as they're released into one central repo where they can all play off each others' strengths.
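The retrieval step described above can be sketched in a few lines. Note the hedge: real RAG scores passages with embedding vectors, while plain token overlap stands in here so the toy stays dependency-free.

```python
# Toy sketch of retrieval: rank stored passages against a query and
# surface the best matches. Token overlap is a stand-in for the
# embedding-similarity scoring a real RAG pipeline would use.
def score(query: str, passage: str) -> float:
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / len(q) if q else 0.0

def retrieve(query: str, passages: list[str], top_k: int = 2) -> list[str]:
    ranked = sorted(passages, key=lambda p: score(query, p), reverse=True)
    return ranked[:top_k]
```

The surfaced passages then get pasted into the LLM prompt as context, which is the "further use in conjunction with an LLM" step.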
-
Can I get a pointer on Kate LSP Clients? I'm trying to add a brand new one.
I'm working on UniteAI, a project to tie different AI capabilities into the editor, and it has a clean LSP Server.
-
UniteAI, collab with AIs in your text editor by writing alongside each other
*TL;DR*: chat with AI, code with AI, speak to AI (voice-to-text + vice versa), have AI search huge corpora or websites for you, all via an interface of collaborating on a text doc together in the editor you use now.
*Motivation*
I find the last year of AI incredibly heartening. Researchers are still regularly releasing SoTA models in disparate domains. Meta is releasing the powerful Llama models under generous terms (as is the UAE with Falcon?!). And the Open Source community has poured a tidal wave of interest and effort into building things out of these tools (112k repos on GH mentioning ML!).
Facing this deluge of valuable things that communities are shepherding into the world, I wanted to incorporate them into my workflows, which as a software engineer, means my text editor.
*UniteAI*
So I started *UniteAI* https://github.com/freckletonj/uniteai, an Apache-2.0 licensed tool.
Check out the screencasts: https://github.com/freckletonj/uniteai#some-core-features
This project:
* Ties in to *any editor* via Language Server Protocol. Like collaborating in G-Docs, you collab with whatever AI directly in the document, all of you writing alongside each other concurrently.
* Like Copilot / Cursor, it can write code/text right in your doc.
* It supports *any Locally runnable model* (Llama family, Falcon, Finetunes, the 21k available models on HF, etc.)
* It supports *OpenAI/ChatGPT* via API key.
* *Speech-to-Text*, useful for writing prompts to your LLM
* You can do *Semantic Search* (Retrieval Augmented Generation) on many sources: local files, Arxiv, youtube transcripts, Project Gutenberg books, any online HTML, basically if you give it a URI, it can probably use it.
* You can trigger features easily via [key combos](https://github.com/freckletonj/uniteai#keycombos).
* Written in Python, so, much more generic than writing a bespoke `some_specific_editor` plugin.
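The "any editor" claim above works because LSP messages are just JSON-RPC payloads framed by a `Content-Length` header. A minimal sketch of that wire framing (hypothetical helper names, stdlib only):

```python
# LSP base-protocol framing: a Content-Length header, a blank line,
# then a JSON-RPC body. Any editor's LSP client speaks this, which is
# why one Python server covers them all.
import json

def frame(message: dict) -> bytes:
    body = json.dumps(message).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

def unframe(data: bytes) -> dict:
    header, _, body = data.partition(b"\r\n\r\n")
    length = int(header.split(b":")[1])
    return json.loads(body[:length])
```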
*Caveat*
Since it always comes up, *AI is not perfect*. AI is a tool to augment your time, not replace it. It hallucinates, it lies, it bullshits, it writes bad code, it gives dangerous advice.
But it can still do many useful things, and for me it is a *huge force multiplier.*
*You need a Human In The Loop*, which is why it's nice to work together iteratively on a text document, as in this project. You keep it on track.
*Why is this interesting*
These tools play well when used together:
* *Code example:* I can Voice-to-Text a function comment then send that to an LLM to write the function.
* *Code example 2:* I can chat through project architecture plans, strategies, and libraries I should consider.
* *Documentation example:* I can retrieve relevant sections of my city's building code with a natural language query, then send that to an LLM to expound upon.
* *Authorship example*: I can have my story arcs and character dossiers in some markdown file, and use that guidance to contextualize an AI as it works with me for writing a story.
* *Entertainment example*: I told my AI it was a Dungeon Master, then over breakfast with friends, used Voice-to-Text and Text-to-Wizened-Wizard-Voice, and played a hilarious game. I still had to drive all this via a text doc and handy key combos.
*RFC*
Installation instructions are on the repo: https://github.com/freckletonj/uniteai#quickstart-installing...
This is still nascent, and I welcome all feedback, positive or critical.
We have a community linked on the repo which you're invited to join.
I'd love to chat with people who like this idea, use it, want to see other features, want to contribute their effort, want to file bug reports, etc.
A big part of my motivation in this is to socialize with like-minds, and build something cool.
*Thanks for checking this out!*
- UniteAI: In an editor, self hosted llama, code llama, mic voice transcription, and ai-powered web/document search
- [ UniteAI ]: "your AIs in your editor". I've been bustin my butt, and feel like it's finally worth presenting to the world.
-
Show HN: Use Code Llama as Drop-In Replacement for Copilot Chat
[UniteAI](https://github.com/freckletonj/uniteai) I think fits the bill for you.
This is my project, where the goal is to Unite your AI-stack inside your editor (so, Speech-to-text, Local LLMs, Chat GPT, Retrieval Augmented Gen, etc).
It's built atop a Language Server, so while no one has made an IntelliJ client yet, it's simple to do. I'll help you do it if you make a GH Issue!
-
UniteAI: Your AI-Stack in your Editor
UniteAI (github)
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
chatcraft.org - Developer-oriented ChatGPT clone
llama.cpp - LLM inference in C/C++
continue - ⏩ Open-source VS Code and JetBrains extensions that enable you to easily create your own modular AI software development system
nanoChatGPT - nanogpt turned into a chat model
SeaGOAT - local-first semantic code search engine
gpt-fast - Simple and efficient pytorch-native transformer text generation in <1000 LOC of python.
semantic-code-search - Search your codebase with natural language • CLI • No data leaves your computer
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
gw2combat - A GW2 combat simulator using entity-component-system design
accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
evadb - Database system for AI-powered apps