| | Code-LMs | LoRA |
|---|---|---|
| Mentions | 4 | 35 |
| Stars | 1,721 | 9,534 |
| Growth | - | 5.1% |
| Activity | 1.6 | 4.7 |
| Latest commit | about 1 year ago | about 2 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
Code-LMs
- PolyCoder LLM integration
- CodeboldAI
Could we add CodeT5 and/or PolyCoder to the supported models?
- [R] PolyCoder 2.7BN LLM - open source model and parameters {CMU}
Code for https://arxiv.org/abs/2202.13169 found: https://github.com/VHellendoorn/Code-LMs
- Hey folks, here is a really cool research update from CMU researchers, who open-sourced ‘PolyCoder’, their machine-learning-based code generator with 2.7B parameters.
GitHub: https://github.com/vhellendoorn/code-lms
LoRA
- A look at Apple's technical approach to AI including core model performance etc.
- DECT NR+: A technical dive into non-cellular 5G
This seems to be an order of magnitude better than LoRa (https://lora-alliance.org/ not https://arxiv.org/abs/2106.09685). LoRa doesn't have all the features this one does like OFDM, TDM, FDM, and HARQ. I didn't know there's spectrum dedicated for DECT use.
- Training LLMs Taking Too Much Time? Technique you need to know to train it faster
To solve this, we researched some optimization techniques and found LoRA, which stands for Low-Rank Adaptation of Large Language Models.
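The core of the technique is small enough to sketch. Below is a minimal, illustrative PyTorch version of a LoRA-adapted linear layer; the class name and hyperparameters are invented for the example, and the reference implementation lives at https://github.com/microsoft/LoRA:
```python
# Minimal, illustrative LoRA layer (a sketch, not the reference implementation).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        # Frozen pre-trained weight: excluded from gradient updates.
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)
        # Trainable rank-r factors: only r * (in + out) parameters.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Base projection plus the scaled low-rank update (B @ A) applied to x.
        return x @ self.weight.T + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```
Because `lora_B` starts at zero, training begins exactly at the pre-trained model's behavior, and only the two small factors receive gradients.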
- OpenAI employee: GPT-4.5 rumor was a hallucination
> Anyone have any ideas / knowledge on how they deploy little incremental fixes to exploited jailbreaks, etc?
LoRA[1] would be my guess.
For a detailed explanation I recommend the paper. But the short version is that it is a trick that lets you train a much smaller, lower-dimensional update which, when added to the original model's weights, gives you the behavior you want.
1: https://arxiv.org/abs/2106.09685
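A toy numpy illustration of that comment (all shapes invented): the trained update B @ A has rank r and can simply be added into the original weight, so serving the adapted model costs no extra inference time:
```python
import numpy as np

d, r = 1024, 8                       # model dim and LoRA rank (made up)
W = np.random.randn(d, d)            # frozen pre-trained weight
A = np.random.randn(r, d) * 0.01     # trained low-rank factors
B = np.random.randn(d, r) * 0.01

W_merged = W + B @ A                 # "add to the original model"
print(np.linalg.matrix_rank(B @ A))  # 8: the update itself is only rank r
```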
- Can a LoRa be used on models other than Stable Diffusion?
LoRA was initially developed for large language models, https://arxiv.org/abs/2106.09685 (2021). It was later that people discovered that it worked REALLY well for diffusion models.
- StyleTTS2 – open-source Eleven Labs quality Text To Speech
Curious if we'll see a Civitai-style LoRA[1] marketplace for text-to-speech models.
1: https://github.com/microsoft/LoRA
- Andreessen Horowitz Invests in Civitai, Which Profits from Nonconsensual AI Porn
From https://arxiv.org/abs/2106.09685:
> LoRA: Low-Rank Adaptation of Large Language Models
> An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency.
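The abstract's reduction factor follows from simple counting. A back-of-the-envelope for a single d x d attention weight (d = 12288 is GPT-3 175B's hidden size; r = 4 is illustrative, and the paper shows very small ranks suffice):
```python
d, r = 12288, 4
full = d * d          # parameters touched by full fine-tuning of one matrix
lora = r * (d + d)    # parameters in the rank-r factors A and B
print(full, lora, full // lora)   # 150994944 98304 1536 -> ~1500x fewer per matrix
```
The abstract's 10,000x figure is measured over the whole model: all 175B parameters versus only the small factors injected into a subset of the attention matrices.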
- Is supervised learning dead for computer vision?
Yes, your understanding is correct. However, instead of adding a head on top of the network, most fine-tuning is currently done with LoRA (https://github.com/microsoft/LoRA). This introduces low-rank matrices between layers of your model; these are then trained on your data while the rest of the model's weights stay frozen.
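As a concrete (hypothetical) example of that workflow using the Hugging Face peft library listed in the alternatives below, with GPT-2 standing in for the model:
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")   # any causal LM works here
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["c_attn"])        # GPT-2's attention projection
model = get_peft_model(base, config)  # freezes the base, injects the A/B factors
model.print_trainable_parameters()    # only the LoRA factors are trainable
```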
- Run LLMs at home, BitTorrent‑style
Somewhat yes. See "LoRA": https://arxiv.org/abs/2106.09685
They're not composable in the sense that you can take these adaptation layers and arbitrarily combine them, but training different models while sharing a common base of weights is a solved problem.
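A sketch of that shared-base pattern with peft (the adapter paths are hypothetical): one frozen base model in memory, with small per-task LoRA adapters loaded and switched on top:
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "adapters/task-a")    # first adapter
model.load_adapter("adapters/task-b", adapter_name="task-b")  # second adapter
model.set_adapter("task-b")   # switch tasks; the base weights never change
```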
- New LoRa RF distance record: 1336 km / 830 mi
With all the naive AI zealotry on HN, can you really fault me?
They're referring to this:
https://arxiv.org/abs/2106.09685
What are some alternatives?
transfer-learning-conv-ai - 🦄 State-of-the-Art Conversational AI with Transfer Learning
LyCORIS - Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.
CodeT5 - Home of CodeT5: Open Code LLMs for Code Understanding and Generation
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
ControlNet - Let us control diffusion models!
peft - 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
alpaca-lora - Instruct-tune LLaMA on consumer hardware
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
LLaMA-Adapter - [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
sd-webui-additional-networks
gpt4all - gpt4all: run open-source LLMs anywhere