java-snapshot-testing vs trl

| | java-snapshot-testing | trl |
|---|---|---|
| Mentions | 2 | 15 |
| Stars | 119 | 15,305 |
| Growth | 1.7% | 3.7% |
| Activity | 4.1 | 9.8 |
| Latest commit | 9 months ago | 2 days ago |
| Language | Java | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
java-snapshot-testing
- FLaNK Stack 29 Jan 2024
- 📸 Snapshot Testing with Kotlin
In this PoC I will use origin-energy/java-snapshot-testing, which describes itself as "the testing framework loved by lazy productive devs". I use it whenever I find myself manually saving test expectations as text files 😅
trl
- Long-Context GRPO
I'm waiting for https://github.com/huggingface/trl/pull/2810 to land. I think this should work with the existing unsloth setup without changes.
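The excerpt above is about GRPO training in TRL. As a rough illustration, here is a minimal sketch of a GRPOTrainer run with enlarged prompt/completion lengths; the model name, dataset, and reward function are placeholders, and nothing here assumes the linked PR has landed.

```python
# Minimal sketch of a long-context GRPO run with TRL's GRPOTrainer.
# Model, dataset, and reward function below are illustrative placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 200 characters.
    return [-abs(len(c) - 200) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # any dataset with a "prompt" column

args = GRPOConfig(
    output_dir="grpo-long-context",
    max_prompt_length=4096,        # the "long context" part: allow long prompts
    max_completion_length=1024,
    per_device_train_batch_size=1,
)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=args,
    train_dataset=dataset,
)
trainer.train()
```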
- ORPO, DPO, and PPO: Optimizing Models for Human Preferences
Implementation: ORPO has been integrated into popular fine-tuning libraries like TRL, Axolotl, and LLaMA-Factory.
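Since the post notes that ORPO is available in TRL, here is a hedged sketch of what that integration looks like; the model and dataset names are placeholders, not taken from the post.

```python
# A hedged sketch of ORPO fine-tuning with TRL (model and dataset are placeholders).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# ORPO trains directly on preference pairs: "prompt", "chosen", "rejected" columns.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = ORPOConfig(output_dir="orpo-model", per_device_train_batch_size=2, beta=0.1)
trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # called `tokenizer=` in older TRL releases
)
trainer.train()
```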
- FLaNK Stack 29 Jan 2024
- OOM Error while using TRL for RLHF Fine-tuning
I am using TRL for RLHF fine-tuning of the Llama-2-7B model and getting an OOM error (even with batch_size=1). If anyone has used TRL for RLHF, could you please tell me what I am doing wrong? Code details can be found in the GitHub issue.
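Not from the post itself, but for context, a hedged sketch of the usual memory-saving levers when RLHF fine-tuning a 7B model with TRL: 4-bit quantization, a LoRA adapter, and gradient accumulation instead of a larger batch.

```python
# Hedged sketch of memory-saving settings for RLHF on a 7B model with TRL + PEFT
# (legacy PPOTrainer-style config; exact arguments vary across TRL versions).
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig
from trl import AutoModelForCausalLMWithValueHead, PPOConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
lora_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")

# Load the policy in 4-bit and train only the LoRA weights plus the value head.
model = AutoModelForCausalLMWithValueHead.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    peft_config=lora_config,
)

# Keep the per-step batch small; accumulate gradients instead of raising batch_size.
ppo_config = PPOConfig(batch_size=1, mini_batch_size=1, gradient_accumulation_steps=8)
```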
- [D] Tokenizers Truncation during Fine-tuning with Large Texts
SFTTrainer from Hugging Face
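The thread is about how long texts get truncated during fine-tuning; a hedged sketch of making that behaviour explicit with SFTTrainer (dataset and model names are placeholders, and argument names have shifted slightly across TRL versions):

```python
# Hedged sketch: set the truncation/packing behaviour explicitly instead of relying
# on SFTTrainer defaults. Dataset and model names are illustrative only.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")

args = SFTConfig(
    output_dir="sft-model",
    max_seq_length=2048,   # anything longer is truncated (or split when packing)
    packing=True,          # pack short examples together instead of padding
)
trainer = SFTTrainer(model="Qwen/Qwen2.5-0.5B", args=args, train_dataset=dataset)
trainer.train()
```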
- New Open-source LLMs! 🤯 The Falcon has landed! 7B and 40B
For LoRA, PEFT seems to work. I don't have the patience to wait 5 hours, but modifying this example seems to work. You don't even need to modify that much, since their model, like NeoX, uses the query_key_value name for self-attention.
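The point about the query_key_value module name is the key detail when configuring LoRA for Falcon; a hedged sketch:

```python
# Hedged sketch: Falcon, like GPT-NeoX, exposes a single fused attention projection
# named "query_key_value", so that's what LoRA should target.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b")
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query_key_value"],  # fused q/k/v projection in Falcon/NeoX
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```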
- [D] Using RLHF beyond preference tuning
They have examples of making GPT output more positive (code) by using a sentiment model as the reward. There are other examples on reducing toxicity and summarization here: https://github.com/lvwerra/trl/tree/main/examples . It should be fairly simple to modify the sentiment example and try the calculator reward you mentioned above.
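To make the reward-swapping idea concrete, a hedged sketch of the reward part of the sentiment example: generated text is scored by a sentiment classifier, and that score is what PPO maximises. Swapping this function for a calculator check is the modification suggested above.

```python
# Hedged sketch of the reward used in TRL's classic sentiment example: score each
# generated response with a sentiment classifier and hand the POSITIVE score to PPO.
from transformers import pipeline

sentiment = pipeline("text-classification", model="lvwerra/distilbert-imdb", top_k=None)

def compute_rewards(responses):
    # One scalar reward per response; replace this with e.g. a calculator check
    # to reward correct arithmetic instead of positive sentiment.
    outputs = sentiment(responses)
    return [next(s["score"] for s in out if s["label"] == "POSITIVE") for out in outputs]

print(compute_rewards(["This movie was wonderful!", "Terrible, do not watch."]))
```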
- [R] 🤖🌟 Unlock the Power of Personal AI: Introducing ChatLLaMA, Your Custom Personal Assistant! 🚀💬
You can use this -> https://github.com/lvwerra/trl/blob/main/examples/sentiment/scripts/gpt-neox-20b_peft/merge_peft_adapter.py
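The linked script merges a trained PEFT/LoRA adapter back into its base model; a hedged sketch of the same operation with the current PEFT API (paths and model name are placeholders):

```python
# Hedged sketch of what a merge-adapter script does: load the base model, attach the
# trained LoRA adapter, fold the adapter weights in, and save a standalone checkpoint.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "EleutherAI/gpt-neox-20b"    # base model the adapter was trained on
adapter_path = "path/to/lora-adapter"    # placeholder path to the trained adapter

base = AutoModelForCausalLM.from_pretrained(base_name)
model = PeftModel.from_pretrained(base, adapter_path)
merged = model.merge_and_unload()        # LoRA weights merged into base weights

merged.save_pretrained("merged-model")
AutoTokenizer.from_pretrained(base_name).save_pretrained("merged-model")
```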
- [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003
Just the hh dataset directly. From the results it seems like it might be enough, but I might also try instruction tuning and then running the whole process from that base. I will also run the reinforcement learning with a LoRA, using this as an example: https://github.com/lvwerra/trl/tree/main/examples/sentiment/scripts/gpt-neox-20b_peft
- [R] A simple explanation of Reinforcement Learning from Human Feedback (RLHF)
This package is pretty simple to use! https://github.com/lvwerra/trl
What are some alternatives?
reor - Private & local AI personal knowledge management app for high entropy people.
llama-cookbook - Welcome to the Llama Cookbook! This is your go-to guide for building with Llama: getting started with inference, fine-tuning, and RAG. We also show you how to solve end-to-end problems using the Llama model family on various provider services.
LLMs-from-scratch - Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
lm-human-preferences - Code for the paper Fine-Tuning Language Models from Human Preferences
Deep_Object_Pose - Deep Object Pose Estimation (DOPE) – ROS inference (CoRL 2018)
LLaMA-8bit-LoRA - Repository for Chat LLaMA - training a LoRA for the LLaMA (1 or 2) models on HuggingFace with 8-bit or 4-bit quantization. Research only.