get-beam vs alpaca-lora

| | get-beam | alpaca-lora |
|---|---|---|
| Mentions | 9 | 107 |
| Stars | 102 | 18,912 |
| Growth | - | 0.1% |
| Activity | 7.3 | 0.0 |
| Latest Commit | 12 months ago | 11 months ago |
| Language | Shell | Jupyter Notebook |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
get-beam
- Show HN: Beta9 – Open-source, serverless GPU container runtime
- A custom runc container runtime
You can run Beta9 locally, or using a managed service, Beam Cloud (https://beam.cloud).
See it in action: https://github.com/beam-cloud/beta9
- Ask HN: Where to find an env with GPU for model training?
You should check out https://beam.cloud (I'm the founder); it'll give you access to plenty of cloud GPU resources for training or inference.
Right now it's pretty hard to get GPU quota on AWS/GCP, so hopefully this is useful for you.
- Cloudflare launches new AI tools to help customers deploy and run models
Cloudflare AI and Replicate are great for running off-the-shelf models, but anything custom is going to incur a 10+ minute cold start.
For running custom fine-tuned models on serverless, you could look into https://beam.cloud, which is optimized for serving custom models with extremely fast cold starts (I'm a little biased since I work there, but the numbers don't lie).
- Workers AI: serverless GPU-powered inference on Cloudflare’s global network
Serverless only works if the cold boot is fast. For context, my company runs a serverless cloud GPU product called https://beam.cloud, which we've optimized for fast cold start. We see Whisper cold start in production in under 10s (across model sizes). A lot of our users are running semi-real-time STT, and this seems to be working well for them.
- Ultrafast serverless GPU runtime for custom SD models
I’m Eli, and my co-founder and I built Beam to run workloads on serverless cloud GPUs with hot reloading, autoscaling, and (of course) fast cold start. You don’t need Docker or AWS to use it, and everyone who signs up gets 10 hours of free GPU credit to try it out.
- [D] We built Beam: An ultrafast serverless GPU runtime
Github with example apps and tutorials: https://github.com/slai-labs/get-beam/tree/main/examples
- How to Finetune Llama 2: A Beginner's Guide
In this blog post, I want to make it as simple as possible to fine-tune the LLaMA 2 7B model, using as little code as possible. We will be using the Alpaca LoRA training script, which automates the process of fine-tuning the model, and for the GPU we will be using Beam.
- Run CodeLlama on a Serverless GPU
alpaca-lora
- How to deal with loss for SFT for CausalLM
Here is an example: https://github.com/tloen/alpaca-lora/blob/main/finetune.py
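For context on how the loss works in a script like that: Hugging Face causal-LM models compute a shifted cross-entropy over whatever you pass as `labels`, ignoring positions set to -100, so supervised fine-tuning typically just copies `input_ids` into `labels` (optionally masking the prompt). A minimal sketch, with the model name chosen purely for illustration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any small causal LM works for the illustration; gpt2 is just an example.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "### Instruction:\nSay hi.\n\n### Response:\nHi!"
enc = tok(text, return_tensors="pt")

# For causal-LM SFT the labels are the input ids themselves; the model shifts
# them internally and computes cross-entropy. Positions set to -100 are ignored.
labels = enc["input_ids"].clone()

out = model(**enc, labels=labels)
print(out.loss)  # scalar loss over the non-masked tokens
```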
- How to Finetune Llama 2: A Beginner's Guide
In this blog post, I want to make it as simple as possible to fine-tune the LLaMA 2 7B model, using as little code as possible. We will be using the Alpaca LoRA training script, which automates the process of fine-tuning the model, and for the GPU we will be using Beam.
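At its core, the Alpaca LoRA script attaches low-rank adapters to a frozen base model with PEFT and trains only those adapters. A rough sketch of that step, assuming access to a Llama 2 checkpoint (the hyperparameters here are illustrative, not the script's exact values):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # illustrative; the repo is gated and needs access approval
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, load_in_8bit=True, device_map="auto")

# LoRA: keep the base weights frozen and train small low-rank adapters
# on the attention projections only.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# ...training then proceeds with transformers.Trainer on instruction/response pairs.
```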
- Fine-tuning LLMs with LoRA: A Gentle Introduction
Implement the code in the Llama LoRA repo in a script we can run locally
- Newbie here - trying to install Alpaca Lora and hitting an error
Hi all - I'm relatively new to GitHub / programming in general, and I wanted to try to set up Alpaca Lora locally. Following the guide here: https://github.com/tloen/alpaca-lora
- A simple repo for fine-tuning LLMs with both GPTQ and bitsandbytes quantization. Also supports ExLlama for inference for the best speed.
Following up on u/tloen's popular alpaca-lora work, I wrapped the setup of alpaca_lora_4bit to add support for GPTQ training in the form of installable pip packages. You can perform training and inference with multiple quantization methods to compare the results.
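For reference, the bitsandbytes side of that setup (4-bit NF4 loading plus LoRA on top) looks roughly like the sketch below; the GPTQ path the package adds works differently, and the model name is only an example:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Quantize the base weights to 4-bit NF4 at load time via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_3b",  # example model, not a recommendation
    quantization_config=bnb_config,
    device_map="auto",
)

# Make the quantized model trainable (casts norms, enables grads where needed),
# then attach LoRA adapters exactly as in the full-precision case.
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))
```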
- FLaNK Stack Weekly for 20 June 2023
- Converting to GGML?
If instead you want to apply a LoRA to a PyTorch model, a lot of people use this script to apply the LoRA to the 16-bit model and then quantize it with a GPTQ program afterwards: https://github.com/tloen/alpaca-lora/blob/main/export_hf_checkpoint.py
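That export script essentially folds the trained adapter back into the base weights and saves a plain Hugging Face checkpoint that GGML/GPTQ converters can consume. A minimal sketch of the same idea with current PEFT APIs (the model name and paths are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder base model
    torch_dtype=torch.float16,
)
# Load the trained LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, "./lora-adapter")  # placeholder adapter path

# Fold the low-rank deltas into the base weights and drop the adapter wrappers,
# leaving an ordinary fp16 checkpoint ready for quantization.
merged = model.merge_and_unload()
merged.save_pretrained("./merged-hf-checkpoint")
```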
- Simple LLM Watermarking - Open Llama 3b LoRA
There are a few papers on watermarking LLM output, but from what I have seen they all use complex detection methods so the watermark goes unseen by the end user and is only detected by algorithm. I believe that a more overt system of watermarking might also be beneficial. One simple method I have tried is character substitution. For this model, I LoRA-finetuned openlm-research/open_llama_3b on the alpaca_data_cleaned_archive.json dataset from https://github.com/tloen/alpaca-lora/, modified by replacing all instances of the "." character in the outputs with "ι". The results are pretty good, with the correct substitutions being generated by the model in most cases. It doesn't always work, but this was only a LoRA training run of two epochs of 400 steps each, and 100% substitution isn't really required.
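The dataset modification described there amounts to a one-line transformation of the Alpaca JSON file. A sketch of how it might look (file names follow the post, but treat the paths as placeholders):

```python
import json

# Load the alpaca-lora training data: a list of
# {"instruction": ..., "input": ..., "output": ...} records.
with open("alpaca_data_cleaned_archive.json", encoding="utf-8") as f:
    records = json.load(f)

# Watermark: replace every "." in the target outputs with the lookalike "ι",
# so a model finetuned on this data learns to emit the substituted character.
for record in records:
    record["output"] = record["output"].replace(".", "ι")

with open("alpaca_data_watermarked.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```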
- text-generation-webui's "Train Only After" option
I am kind of new to finetuning LLMs and am not able to understand what this option exactly refers to. I guess it has the same meaning as the "train_on_inputs" parameter of alpaca-lora.
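For what it's worth, alpaca-lora's train_on_inputs flag controls whether the prompt tokens contribute to the loss: when it is off, the labels for everything before the response are set to -100, so only the response is trained on ("train only after" the prompt). Roughly, with a simplified prompt format and an arbitrary tokenizer:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # any tokenizer works for the illustration

prompt = "### Instruction:\nSay hi.\n\n### Response:\n"
response = "Hi!"

full_ids = tok(prompt + response)["input_ids"]
prompt_len = len(tok(prompt)["input_ids"])

# train_on_inputs=False: mask the prompt with -100 so cross-entropy is only
# computed on the response tokens that come after it.
labels = [-100] * prompt_len + full_ids[prompt_len:]
assert len(labels) == len(full_ids)
```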
- Learning sources on working with local LLMs
Read the paper and also: https://github.com/tloen/alpaca-lora
What are some alternatives?
beta9 - Scalable Infrastructure for Running Your AI Workloads at Scale
text-generation-webui - A Gradio web UI for Large Language Models with support for multiple inference backends.
discourse-ai
dalai - The simplest way to run LLaMA on your local machine
whisper-turbo - Cross-Platform, GPU Accelerated Whisper 🏎️
RWKV-LM - RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". So it's combining the best of RNN and transformer - great performance, linear time, constant space (no kv-cache), fast training, infinite ctx_len, and free sentence embedding.