xTuring vs OpenChatKit

| | xTuring | OpenChatKit |
|---|---|---|
| Mentions | 31 | 23 |
| Stars | 2,525 | 9,001 |
| Growth | 0.9% | 0.1% |
| Activity | 8.4 | 7.1 |
| Latest commit | about 1 month ago | about 1 month ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
xTuring
- I'm developing an open-source AI tool called xTuring, enabling anyone to construct a Language Model with just 5 lines of code. I'd love to hear your thoughts!
  Explore the project on GitHub here.
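For context, the "5 lines of code" claim maps onto an API roughly like the sketch below, based on xTuring's documented `BaseModel` / `InstructionDataset` interface. The dataset path and prompt are placeholders, and exact model keys can differ between releases.

```python
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel

# Load an Alpaca-style instruction dataset (path is a placeholder)
dataset = InstructionDataset("./alpaca_data")

# Create a LLaMA model with LoRA adapters and fine-tune it on the dataset
model = BaseModel.create("llama_lora")
model.finetune(dataset=dataset)

# Generate text with the fine-tuned model
output = model.generate(texts=["Why should I fine-tune an LLM?"])
print(output)
```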
- LLaMA 2 fine-tuning made easier and faster
  If you're curious, I encourage you to:
  - Dive deeper with the LLaMA 2 tutorial here.
  - Explore the project on GitHub here.
  - Connect with our community on Discord here.
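As a rough idea of what the LLaMA 2 workflow looks like, the sketch below swaps in a LLaMA 2 model key and saves the fine-tuned weights. The `"llama2_lora"` key and the save path are assumptions made by analogy with xTuring's LLaMA 1 naming; the linked tutorial has the exact identifiers.

```python
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel

dataset = InstructionDataset("./alpaca_data")   # placeholder dataset path

# "llama2_lora" is assumed here by analogy with the "llama_lora" key; verify against the docs
model = BaseModel.create("llama2_lora")
model.finetune(dataset=dataset)

# Persist the fine-tuned weights for later reuse
model.save("./llama2_finetuned")
```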
- RAG vs. Fine-Tuning
  If you want the best performance, you need to do both RAG and fine-tuning very well. There are plenty of resources on doing fine-tuning, though. I'm one of the contributors to the https://github.com/stochasticai/xturing project, which is focused on fine-tuning LLMs. You can find help in the Discord channel listed on the GitHub.
- Build, customize and control your own personal LLMs via xTuring OSS
- Finetuning LLaMA 2 (the base models)?
  What tools have you used that achieved great results? … For me, I have tried xTuring and SFTTrainer, and they got me semi-okay results.
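For comparison, the SFTTrainer route mentioned above (from Hugging Face's trl library) looks roughly like this. It is a sketch of the trl quickstart pattern; argument names such as `dataset_text_field` have shifted between trl versions, and the base model and dataset here are placeholders.

```python
from datasets import load_dataset
from trl import SFTTrainer

# Placeholder dataset with a plain-text "text" column
dataset = load_dataset("imdb", split="train")

# SFTTrainer wraps the usual Trainer setup for supervised fine-tuning
trainer = SFTTrainer(
    "facebook/opt-350m",          # placeholder base model
    train_dataset=dataset,
    dataset_text_field="text",    # column holding the training text (older trl versions)
    max_seq_length=512,
)
trainer.train()
```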
- Finetuning using Google Colab (Free Tier)
  Code: https://github.com/stochasticai/xTuring/blob/main/examples/llama/llama_lora_int8.py
  Colab: https://colab.research.google.com/drive/1SQUXq1AMZPSLD4mk3A3swUIc6Y2dclme?usp=sharing
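The linked example boils down to the same `BaseModel` workflow with an INT8 LoRA model key, which is what keeps the memory footprint small enough for a free-tier Colab GPU. A rough sketch, assuming the `"llama_lora_int8"` key suggested by the example's filename; the dataset path is a placeholder.

```python
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel

dataset = InstructionDataset("./alpaca_data")   # placeholder path to an Alpaca-format dataset

# LoRA adapters on top of an 8-bit quantized LLaMA base keep VRAM usage low
model = BaseModel.create("llama_lora_int8")
model.finetune(dataset=dataset)
```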
- I would like to try my hand at finetuning some models. What is the best way to start? I have some questions that I'd appreciate your help on.
  We are a group of researchers out of Harvard working on an open-source library called xTuring, focused on fine-tuning LLMs: https://github.com/stochasticai/xturing.
- Fine tuning on my tweets
  For fine-tuning, I was thinking about using this (low GPU memory footprint): https://github.com/stochasticai/xturing/blob/main/examples/int4_finetuning/README.md
- Colab for finetuning llama models in 4-bit?
  I can't speak for QLoRA, as I haven't had a chance to get an implementation working, but I've had success with StochasticAI's xTuring. It's by far the most streamlined method of finetuning I've come across, and they offer int8 and int4 finetuning (but only for LLaMA-7B).
- Just wanna say this.
OpenChatKit
- OpenChatKit - OSS Framework for building chatbots
- How should I get an in-depth mathematical understanding of generative AI?
  ChatGPT isn't open-sourced, so we don't know what the actual implementation is. I think you can read Open Assistant's source code for application design. If that is too much, try OpenChatKit's source code for developer tools. If you need a very bare implementation, you should go for lucidrains/PaLM-rlhf-pytorch.
- OpenChatKit
- OpenChatKit: Open-source kit for setting up a local, libre, LLM chatbot
- I created a locally-run AI assistant for UE5's documentation
  For a locally run, open-source option, I'd recommend taking a look at OpenChatKit. It's built on top of a couple of different open-source LLMs that have been fine-tuned for use as chatbots. I've only messed around with the online demo a little bit, but from what I've read it is supposed to run on a laptop and be almost as good as ChatGPT 3.5.
- [D] Are there any MIT-licensed (or similar) open-sourced instruction-tuned LLMs available?
  OpenChatKit: https://github.com/togethercomputer/OpenChatKit
- [D] Is there currently anything comparable to the OpenAI API?
  Together Computer released OpenChatKit a few weeks ago. I haven't tested it, but it looks promising: https://github.com/togethercomputer/OpenChatKit
What are some alternatives?
- quivr - Your GenAI Second Brain 🧠 A personal productivity assistant (RAG) ⚡️🤖 Chat with your docs (PDF, CSV, ...) & apps using Langchain, GPT 3.5 / 4 turbo, Private, Anthropic, VertexAI, Ollama, LLMs, Groq that you can share with users! Local & private alternative to OpenAI GPTs & ChatGPT powered by retrieval-augmented generation.
- alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
- axolotl - Go ahead and axolotl questions
- roomGPT - Upload a photo of your room to generate your dream room with AI.
- FinGPT - FinGPT: Open-Source Financial Large Language Models! Revolutionize 🔥 We release the trained model on HuggingFace.
- Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
- awesome-totally-open-chatgpt - A list of totally open alternatives to ChatGPT
- wik - wik is used to get information about anything on the shell using Wikipedia.
- Meshtasticator - Discrete-event and interactive simulator for Meshtastic.
- simple-llm-finetuner - Simple UI for LLM Model Finetuning
- Zicklein - Finetuning instruct-LLaMA on German datasets.
- minChatGPT - A minimum example of aligning language models with RLHF, similar to ChatGPT