Awesome-LLM vs alpaca-lora
| | Awesome-LLM | alpaca-lora |
|---|---|---|
| Mentions | 10 | 107 |
| Stars | 14,654 | 18,238 |
| Growth | - | - |
| Activity | 8.6 | 3.6 |
| Latest commit | 9 days ago | 3 months ago |
| Language | - | Jupyter Notebook |
| License | Creative Commons Zero v1.0 Universal | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Awesome-LLM
- XGen-7B, a new 7B foundational model trained on up to 8K length for 1.5T tokens
Here are some high-level answers:
"7B" refers to the number of parameters or weights in a model. Within a model family, the versions with more parameters take more compute to train and generally perform better.
A foundational model is a model that is "pretrained" on a massive dataset (usually the bulk of the compute cost). This is generally considered the "raw" model, which is then fine-tuned for specific tasks (e.g. turned into a chatbot).
"8K length" refers to the context window length (in tokens). This is basically an LLM's short-term memory - you can think of it as its attention span and the range over which it can generate reasonable output.
"1.5T tokens" refers to the size of the training corpus. (The short sketch below makes these numbers a bit more concrete.)
In general, Wikipedia (or I suppose ChatGPT 4/Bing Chat with Web Browsing) is a decent enough place to start reading and asking basic questions. I'd recommend starting here: https://en.wikipedia.org/wiki/Large_language_model and finding the related concepts.
For those going deeper, there are a lot of general resource lists like https://github.com/Hannibal046/Awesome-LLM or https://github.com/Mooler0410/LLMsPracticalGuide or one I like, https://sebastianraschka.com/blog/2023/llm-reading-list.html (there are a bajillion of these and you'll find more once you get a grasp on the terms you want to surf for). Almost everything is published on arXiv, and most of it is fairly readable even as a layman.
For non-ML programmers looking to get up to speed, I feel like Karpathy's Zero to Hero/nanoGPT or Jay Mody's picoGPT https://jaykmody.com/blog/gpt-from-scratch/ are an alternative, and maybe better, way to understand the basic concepts at a practical level.
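To give a flavor of the "from scratch" approach those resources take, here is a minimal causal self-attention toy in NumPy (my own illustration, not code from nanoGPT or picoGPT):

```python
# Toy causal self-attention in NumPy, in the spirit of picoGPT/nanoGPT
# (illustrative only; real models add multiple heads, layers, MLPs, etc.).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def causal_self_attention(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv                  # project tokens to queries/keys/values
    scores = q @ k.T / np.sqrt(k.shape[-1])           # scaled dot-product similarity
    mask = np.triu(np.ones_like(scores), k=1) * -1e9  # hide future tokens (causal mask)
    return softmax(scores + mask) @ v                 # weighted mix of value vectors

T, d = 5, 8                                           # 5 tokens, 8-dim embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(T, d))
out = causal_self_attention(x, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)                                      # (5, 8): one output vector per token
```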
- Couple of questions about a.i that can be run locally
- How to dive deeper into LLMs?
- [Hiring] Developer to build AI-powered chatbots with open source LLMs
- Creating a Wiki for all things Local LLM. What do you want to know?
Check out this repo; there should be some useful things worth noting: https://github.com/Hannibal046/Awesome-LLM
- Large Language Model (LLM) Resources
- Curated list for LLMs: papers, training frameworks, tools to deploy, public APIs
- Performance of GPT-4 vs PaLM 2
First, this is a pretty good starting point as a resource for learning about and finding open-source models, and for the overall public history of LLM progress.
- FreedomGPT: AI with no censorship
This seems fishy as fuck. The first red flag is a fishy installer instead of any Hugging Face link for the model. Upon further searching I found this: https://desuarchive.org/g/thread/92686632/#92692092 There are posts in its own sub, r slash freedomgpt, raising concerns, and many new accounts with low karma replying to them (I don't think I can link other subs here, check them yourself); 100% some botting/astroturfing going on. Not touching this. Even in the best-case scenario that this is legit with no funny business, it is supposed to be based on LLaMA, which is a substantially different, much smaller model (hence why it can be run on your computer at all). This is no ChatGPT equivalent either way. I would recommend getting something more reputable from GitHub if you are interested in running LLMs yourself.
- Ask HN: Foundational Papers in AI
https://github.com/Hannibal046/Awesome-LLM has a curated list of LLM-specific resources.
Not the creator, just happened upon it when researching LLMs today.
alpaca-lora
- How to deal with loss for SFT for CausalLM
Here is an example: https://github.com/tloen/alpaca-lora/blob/main/finetune.py
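For context, the usual causal-LM SFT loss pattern with Hugging Face transformers looks roughly like the sketch below (a general illustration, not the actual finetune.py code; GPT-2 is just a small stand-in): labels are a copy of input_ids, padding positions are set to -100, and the model shifts logits/labels internally.

```python
# Minimal sketch of causal-LM SFT loss handling (illustrative; not finetune.py itself).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # stand-in model
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "### Instruction:\nSay hi.\n\n### Response:\nHi!"
enc = tokenizer(text, return_tensors="pt", padding="max_length", max_length=32)

labels = enc["input_ids"].clone()
labels[enc["attention_mask"] == 0] = -100           # ignore padding in the loss

# The model shifts logits/labels internally, so next-token loss "just works".
out = model(input_ids=enc["input_ids"],
            attention_mask=enc["attention_mask"],
            labels=labels)
print(out.loss)
```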
- How to Finetune Llama 2: A Beginner's Guide
In this blog post, I want to make it as simple as possible to fine-tune the LLaMA 2 7B model, using as little code as possible. We will be using the Alpaca LoRA training script, which automates the process of fine-tuning the model, and for GPU compute we will be using Beam.
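As a rough sketch of what such a script sets up under the hood, the usual peft LoRA configuration looks like this (the model id and hyperparameters below are illustrative, not taken from the guide):

```python
# Minimal sketch of a LoRA fine-tune setup with peft (hyperparameters illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"            # assumed model id; requires access approval
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],     # attention projections, as in alpaca-lora
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()           # only the small adapter matrices train

# ...then train with transformers.Trainer or a plain PyTorch loop on the
# instruction dataset; the base weights stay frozen throughout.
```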
- Fine-tuning LLMs with LoRA: A Gentle Introduction
Implement the code in the Llama LoRA repo in a script we can run locally.
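For a gentle feel for LoRA itself, here is a toy sketch of the core idea (my own illustration, not code from the repo): the frozen weight W gets a trainable low-rank correction B·A, so only a tiny fraction of parameters is updated.

```python
# Toy LoRA linear layer: frozen weight plus a trainable rank-r update (illustrative).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)              # frozen base weight
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)    # trained
        self.B = nn.Parameter(torch.zeros(out_features, r))          # trained, starts at 0
        self.scale = alpha / r

    def forward(self, x):
        # y = x W^T + scale * x A^T B^T : the adapter adds a rank-r correction
        return x @ self.weight.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(512, 512)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # only A and B: 2 * 512 * 8 parameters instead of 512 * 512
```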
- Newbie here - trying to install a Alpaca Lora and hitting an error
Hi all - relatively new to GitHub / programming in general, and I wanted to try to set up Alpaca Lora locally. Following the guide here: https://github.com/tloen/alpaca-lora
- A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for inference for the best speed.
Following up on the popular alpaca-lora work by u/tloen, I wrapped the setup of alpaca_lora_4bit to add support for GPTQ training in the form of installable pip packages. You can perform training and inference with multiple quantization methods to compare the results.
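For reference, the bitsandbytes half of this usually looks roughly like the sketch below (4-bit base model plus LoRA adapters); the GPTQ training path packaged by the project has its own API that isn't shown here, and the model id and settings are illustrative.

```python
# Minimal sketch of 4-bit (bitsandbytes) loading plus LoRA adapters, QLoRA-style.
# Model id and hyperparameters are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_3b", quantization_config=bnb_cfg, device_map="auto"
)
model = prepare_model_for_kbit_training(model)   # gradient-checkpointing friendly setup
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))
model.print_trainable_parameters()               # adapters train; 4-bit base stays frozen
```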
- FLaNK Stack Weekly for 20 June 2023
- Converting to GGML?
If instead you want to apply a LoRA to a PyTorch model, a lot of people use this script to merge the LoRA into the 16-bit model and then quantize it with a GPTQ program afterwards: https://github.com/tloen/alpaca-lora/blob/main/export_hf_checkpoint.py
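The same merge step can also be done with peft directly; the sketch below is roughly what export_hf_checkpoint.py accomplishes (the base-model and adapter names are illustrative placeholders, substitute your own):

```python
# Minimal sketch of merging LoRA weights back into the base model with peft
# (roughly what export_hf_checkpoint.py does; names/paths are illustrative).
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
model = PeftModel.from_pretrained(base, "tloen/alpaca-lora-7b")   # LoRA adapter
merged = model.merge_and_unload()                 # fold B@A into the base weights
merged.save_pretrained("./llama-7b-alpaca-merged")  # ready for GPTQ/GGML conversion
```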
- Simple LLM Watermarking - Open Lllama 3b LORA
There are a few papers on watermarking LLM output, but from what I have seen they all use complex methods of detection to allow the watermark to go unseen by the end user, only to be detected by algorithm. I believe that a more overt system of watermarking might also be beneficial. One simple method that I have tried is character substitution. For this model, I LoRA-finetuned openlm-research/open_llama_3b on the alpaca_data_cleaned_archive.json dataset from https://github.com/tloen/alpaca-lora/, modified by replacing all instances of the "." character in the outputs with "ι". The results are pretty good, with the correct substitutions being generated by the model in most cases. It doesn't always work, but this was only a LoRA training, for two epochs of 400 steps each, and 100% substitution isn't really required.
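The dataset edit described above is just a string substitution over the output fields; a minimal sketch, assuming the standard Alpaca instruction/input/output JSON format:

```python
# Minimal sketch of the watermarking dataset edit: replace "." with "ι" in every
# output field (assumes the standard Alpaca instruction/input/output JSON layout).
import json

with open("alpaca_data_cleaned_archive.json") as f:
    data = json.load(f)

for example in data:
    # The fine-tuned model then learns to emit the substituted character itself.
    example["output"] = example["output"].replace(".", "ι")

with open("alpaca_data_watermarked.json", "w") as f:
    json.dump(data, f, ensure_ascii=False, indent=2)
```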
- text-generation-webui's "Train Only After" option
I am kind of new to finetuning LLMs and am not able to understand what this option refers to exactly. I guess it has the same meaning as the "train_on_inputs" parameter of alpaca-lora, though.
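For what it's worth, both options boil down to the same trick: tokens before the response marker get their labels set to -100 so they are excluded from the loss, and the model is only trained on the response. A minimal sketch (the prompt template and GPT-2 tokenizer are just stand-ins):

```python
# Minimal sketch of "train only after" / train_on_inputs=False style label masking.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # stand-in tokenizer
prompt = "### Instruction:\nSay hi.\n\n### Response:\n"
full = prompt + "Hi there!"

prompt_len = len(tokenizer(prompt)["input_ids"])
ids = tokenizer(full)["input_ids"]

labels = list(ids)
labels[:prompt_len] = [-100] * prompt_len   # mask everything before the response marker
# Loss is now computed only on the response tokens; the prompt is context only.
```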
- Learning sources on working with local LLMs
Read the paper and also: https://github.com/tloen/alpaca-lora
What are some alternatives?
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
FreedomGPT - This codebase is for a React and Electron-based app that executes the FreedomGPT LLM locally (offline and private) on Mac and Windows using a chat-based interface
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
LLMZoo - ⚡LLM Zoo is a project that provides data, models, and evaluation benchmark for large language models.⚡
llama.cpp - LLM inference in C/C++
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
gpt4all - gpt4all: run open-source LLMs anywhere
dalai - The simplest way to run LLaMA on your local machine
llama - Inference code for Llama models
langchain - 🦜🔗 Build context-aware reasoning applications
ggml - Tensor library for machine learning