paper2gui vs lora

| | paper2gui | lora |
|---|---|---|
| Mentions | 1 | 83 |
| Stars | 9,575 | 6,642 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last commit | 10 months ago | about 2 months ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
paper2gui
-
I made RealESRGAN-GUI, an anime super-resolution tool that runs in pure in-memory mode
Download: https://github.com/Baiyuetribe/paper2gui
lora
-
You can now train a 70B language model at home
The diffusion UNet now has an "extended" LoRA variant that applies to the ResNet blocks as well as the cross-attention layers: https://github.com/cloneofsimo/lora
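A minimal sketch of what adapting the ResNet part looks like, assuming a plain PyTorch module layout (this is illustrative, not cloneofsimo/lora's actual code): the pre-trained convolution is frozen and a trainable low-rank down/up pair is added in parallel.

```python
# Illustrative only (assumed layout, not cloneofsimo/lora's implementation):
# a LoRA-style adapter for a ResNet convolution. The pre-trained conv is
# frozen; only the low-rank down/up pair is trained.
import torch
import torch.nn as nn

class LoRAConv2d(nn.Module):
    def __init__(self, base: nn.Conv2d, r: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                  # freeze pre-trained weights
        # low-rank path: project down to r channels, then back up
        self.down = nn.Conv2d(base.in_channels, r, kernel_size=base.kernel_size,
                              stride=base.stride, padding=base.padding, bias=False)
        self.up = nn.Conv2d(r, base.out_channels, kernel_size=1, bias=False)
        nn.init.zeros_(self.up.weight)               # adapter starts as a no-op

    def forward(self, x):
        return self.base(x) + self.up(self.down(x))

# Wrap a ResNet-block conv; the cross-attention projections get the analogous
# linear-layer treatment (see the sketch further down).
conv = LoRAConv2d(nn.Conv2d(64, 64, kernel_size=3, padding=1), r=4)
out = conv(torch.randn(1, 64, 32, 32))
```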
-
How it feels right now
Absolutely. But that doesn't matter, because you only have to train it at scale once. There are already papers showing that it's possible to update weights in small sections. You won't have to wait for the next monolithic LLM to drop to get up-to-date information. It will start to learn in bits and pieces.
-
LoRA tuning in Julia
No, it's a deep learning thing
-
What does Lora mean?
Low-Rank Adaptation of Large Language Models.
-
[D] An ELI5 explanation for LoRA - Low-Rank Adaptation.
Recently, I have seen the LoRA technique (Low-Rank Adaptation of Large Language Models) as a popular method for fine-tuning LLMs and other models.
-
Combining LoRA, Retro, and Large Language Models for Efficient Knowledge Retrieval and Retention
Enter LoRA, a method proposed for adapting pre-trained models to specific tasks[2]. By freezing pre-trained model weights and injecting trainable rank decomposition matrices into the transformer architecture, LoRA can reduce the number of trainable parameters and the GPU memory requirement, making the adaptation of LLMs for downstream tasks more feasible.
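As a concrete illustration of that description, here is a minimal sketch of the idea for a single linear projection in plain PyTorch (assumed names, not the paper's reference code): the pre-trained weight is frozen and a trainable rank-r decomposition B·A is added in parallel.

```python
# Minimal LoRA sketch for a linear projection (illustrative, not the
# reference implementation): freeze the pre-trained weight W and train only
# the rank-r factors A and B, so effectively W_eff = W + (alpha / r) * B @ A.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                    # freeze pre-trained weights
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no-op at start
        self.scale = alpha / r

    def forward(self, x):
        # frozen path + trainable low-rank path
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

# Wrapping a 4096x4096 projection trains ~65K parameters instead of ~16.8M.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
out = layer(torch.randn(2, 4096))
```

At inference time the low-rank update can be merged back into the frozen weight, so the adapted model adds no extra latency.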
-
100K Context Windows
Open-source LLM projects have largely solved this using Low-Rank Adaptation of Large Language Models (LoRA): https://arxiv.org/abs/2106.09685
Apparently an RTX 4090 running overnight is sufficient to produce a fine-tuned model that can spit out new Harry Potter stories, or whatever...
-
President Biden meets with AI CEOs at the White House amid ethical criticism
Alpaca was trained for $600 ($100 for the smaller model) and offers outputs competitive with ChatGPT. https://arxiv.org/abs/2106.09685
- LoRA: Low-Rank Adaptation of Large Language Models
What are some alternatives?
flowframes - Flowframes Windows GUI for video interpolation using DAIN (NCNN) or RIFE (CUDA/NCNN)
stable-diffusion-webui - Stable Diffusion web UI
sd-tagtool - A stable diffusion training dataset tag editor.
LyCORIS - Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.
sd_dreambooth_extension
kohya-trainer - Adapted from https://note.com/kohya_ss/n/nbf7ce8d80f29 for easier cloning
ControlNet - Let us control diffusion models!
sd-webui-additional-networks
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
stablediffusion - High-Resolution Image Synthesis with Latent Diffusion Models
peft - 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.