kener vs trl

| | kener | trl |
|---|---|---|
| Mentions | 4 | 13 |
| Stars | 2,109 | 8,467 |
| Growth | - | 4.1% |
| Activity | 9.3 | 9.6 |
| Last Commit | 20 days ago | 8 days ago |
| Language | JavaScript | Python |
| License | MIT License | Apache License 2.0 |
- Stars - the number of stars a project has on GitHub.
- Growth - month-over-month growth in stars (see the sketch below).
- Activity - a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
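For reference, the Growth column is just the relative month-over-month change in star count; a trivial sketch (the previous-month figure is an assumed value for illustration only):

```python
def mom_growth(stars_now: int, stars_last_month: int) -> float:
    """Month-over-month star growth, as a percentage."""
    return (stars_now - stars_last_month) / stars_last_month * 100

# trl has 8,467 stars today; the previous-month figure below is assumed for illustration.
print(round(mom_growth(8467, 8134), 1))  # -> 4.1
```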
kener
-
6 Best Open Source Status Page Alternatives for 2024
1. Kener
- FLaNK Stack 29 Jan 2024
-
Show HN: Built a self hosted clean status page and batteries
I think the incident.svelte file could use some love. Is it best practice to put part of a phrase somewhere else? Doesn't it increase the cognitive load? There's a phrase in the markup, but part of its text is computed elsewhere: https://github.com/rajnandan1/kener/blob/74ea57d6bbf6ac4dd3e...
Isn't it easier to understand what is going on by computing the condition at the top and putting the text in the markup based on that condition?
I feel like there are a few places where, to avoid duplicating part of the text, it becomes very hard to tell what the text will actually be, because pieces of it are placed far away.
trl
- FLaNK Stack 29 Jan 2024
-
OOM Error while using TRL for RLHF Fine-tuning
I am using TRL for RLHF fine-tuning of the Llama-2-7B model and getting an OOM error (even with batch_size=1). If anyone has used TRL for RLHF, could you please tell me what I am doing wrong? Code details can be found in the GitHub issue.
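The usual way to make a 7B PPO run fit in memory is to keep the frozen base weights quantized, train only LoRA adapters, and use a tiny mini-batch with gradient accumulation. A minimal sketch using the classic trl PPOTrainer interface; the model name, LoRA settings, and batch sizes are illustrative assumptions, not the poster's actual configuration:

```python
import torch
from transformers import AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint

# Keep the frozen base weights in 4-bit and train only small LoRA adapters.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

model = AutoModelForCausalLMWithValueHead.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    peft_config=lora_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# A mini-batch of 1 plus gradient accumulation keeps the PPO backward pass small.
ppo_config = PPOConfig(batch_size=8, mini_batch_size=1, gradient_accumulation_steps=8)
ppo_trainer = PPOTrainer(config=ppo_config, model=model, tokenizer=tokenizer)
```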
-
[D] Tokenizers Truncation during Fine-tuning with Large Texts
SFTTrainer from Hugging Face
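On the truncation question: SFTTrainer exposes a max_seq_length setting that controls where long examples are cut off (or how they are packed). A minimal sketch using the classic trl arguments (newer versions move these into SFTConfig); the dataset and model names are placeholders for illustration:

```python
from datasets import load_dataset
from trl import SFTTrainer

dataset = load_dataset("imdb", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="facebook/opt-350m",   # placeholder model id
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=1024,         # longer texts are truncated to this many tokens
    packing=False,               # set True to concatenate examples into full-length chunks instead
)
trainer.train()
```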
-
New Open-source LLMs! 🤯 The Falcon has landed! 7B and 40B
For LoRA, PEFT seems to work. I don't have the patience to wait 5 hours, but modifying this example seems to work. You don't even need to modify that much, since their model, just like GPT-NeoX, uses the query_key_value name for self-attention.
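The detail worth highlighting is that Falcon, like GPT-NeoX, fuses its attention projections into a single query_key_value module, so that is the name a LoRA config has to target. A minimal PEFT sketch; rank, alpha, and dropout are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    trust_remote_code=True,  # Falcon originally shipped custom modelling code
    device_map="auto",
)

# Falcon, like GPT-NeoX, has a single fused attention projection named
# "query_key_value", so that is the module the LoRA adapters attach to.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```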
-
[D] Using RLHF beyond preference tuning
They have examples of making GPT output more positive (code) by using a sentiment model as the reward. There are other examples covering toxicity reduction and summarization here: https://github.com/lvwerra/trl/tree/main/examples . It should be fairly simple to modify the sentiment example and try the calculator reward you mentioned above.
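The linked sentiment example boils down to a three-step loop: generate continuations from the policy, score them with the sentiment classifier, and pass those scores to PPO as rewards. A compressed sketch of that loop using the classic PPOTrainer API; the prompts, generation settings, and the "POSITIVE"-label lookup are illustrative assumptions, so see the linked scripts for the full version:

```python
import torch
from transformers import AutoTokenizer, pipeline
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model = AutoModelForCausalLMWithValueHead.from_pretrained("lvwerra/gpt2-imdb")
tokenizer = AutoTokenizer.from_pretrained("lvwerra/gpt2-imdb")
tokenizer.pad_token = tokenizer.eos_token

# Any model that produces a scalar score can serve as the reward;
# here a sentiment classifier rewards positive continuations.
reward_pipe = pipeline("sentiment-analysis", model="lvwerra/distilbert-imdb")

ppo_trainer = PPOTrainer(
    config=PPOConfig(batch_size=4, mini_batch_size=4),
    model=model,
    tokenizer=tokenizer,
)

query_texts = ["The movie was", "I went to the cinema and", "This film is", "Overall the plot"]
query_tensors = [tokenizer(q, return_tensors="pt").input_ids.squeeze(0) for q in query_texts]

# One PPO iteration: generate, score, optimise.
response_tensors = ppo_trainer.generate(
    query_tensors, return_prompt=False, max_new_tokens=20, pad_token_id=tokenizer.eos_token_id
)
responses = [tokenizer.decode(r, skip_special_tokens=True) for r in response_tensors]
scores = reward_pipe(responses, top_k=None)  # assumes the classifier exposes a "POSITIVE" label
rewards = [torch.tensor(next(s["score"] for s in out if s["label"] == "POSITIVE")) for out in scores]
stats = ppo_trainer.step(query_tensors, response_tensors, rewards)
```

Swapping the sentiment classifier for any other scorer (such as the calculator reward mentioned above) only changes how the rewards list is computed.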
-
[R] 🤖🌟 Unlock the Power of Personal AI: Introducing ChatLLaMA, Your Custom Personal Assistant! 🚀💬
You can use this -> https://github.com/lvwerra/trl/blob/main/examples/sentiment/scripts/gpt-neox-20b_peft/merge_peft_adapter.py
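What that script does, in essence, is fold the trained LoRA adapter back into the base model weights so the result can be loaded like an ordinary checkpoint. A rough sketch of the same idea using peft's merge_and_unload; the model name and paths are placeholders, not the script's actual arguments:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "EleutherAI/gpt-neox-20b"  # placeholder base model
adapter_path = "path/to/lora-adapter"        # placeholder adapter directory

base_model = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype="auto")
model = PeftModel.from_pretrained(base_model, adapter_path)

# Fold the LoRA deltas into the base weights and drop the adapter wrappers,
# leaving an ordinary checkpoint that loads without peft.
merged = model.merge_and_unload()

merged.save_pretrained("merged-model")
AutoTokenizer.from_pretrained(base_model_name).save_pretrained("merged-model")
```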
-
[R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003
Just the hh dataset directly. From the results it seems like it might be enough, but I might also try instruction tuning and then running the whole process from that base. I will also be running the reinforcement learning with a LoRA, using this as an example: https://github.com/lvwerra/trl/tree/main/examples/sentiment/scripts/gpt-neox-20b_peft
-
[R] A simple explanation of Reinforcement Learning from Human Feedback (RLHF)
This package is pretty simple to use! https://github.com/lvwerra/trl
- Transformer Reinforcement Learning
- trl: Train transformer language models with reinforcement learning
What are some alternatives?
uptime-kuma - A fancy self-hosted monitoring tool
lm-human-preferences - Code for the paper Fine-Tuning Language Models from Human Preferences
upptime - ⬆️ GitHub Actions uptime monitor & status page by @AnandChowdhary
alpaca-lora - Instruct-tune LLaMA on consumer hardware
bams - BigBlueButton & AdobeConnect Monitoring Software
trlx - A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF)
LLaMA-8bit-LoRA - Repository for Chat LLaMA - training a LoRA for the LLaMA (1 or 2) models on HuggingFace with 8-bit or 4-bit quantization. Research only.
sparsegpt-for-LLaMA - Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation.
llama-recipes - Scripts for fine-tuning Meta Llama 3 with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default and custom datasets for applications such as summarization and Q&A, a number of inference solutions such as HF TGI and vLLM for local or cloud deployment, and demo apps showcasing Meta Llama 3 for WhatsApp & Messenger.
Deep_Object_Pose - Deep Object Pose Estimation (DOPE) – ROS inference (CoRL 2018)
java-snapshot-testing - Facebook style snapshot testing for JAVA Tests
pong-wars