nvc-gpt3-chat
finetune-gpt2xl
| | nvc-gpt3-chat | finetune-gpt2xl |
|---|---|---|
| Mentions | 1 | 9 |
| Stars | 11 | 421 |
| Growth | - | - |
| Activity | 7.2 | 0.0 |
| Last Commit | about 2 years ago | 10 months ago |
| Language | Python | Python |
| License | Creative Commons Zero v1.0 Universal | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
nvc-gpt3-chat
- A Conversation with a GPT-3 Therapist
I have had some success with an approach based on the algorithmic "compassionate communication" model by the psychologist Marshall Rosenberg. https://github.com/renayo/nvc-gpt3-chat
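The repo drives GPT-3 through OpenAI's completion API. Below is a minimal sketch of the idea, not the repo's actual code; it assumes the legacy openai Python package (pre-1.0, current when the repo was written) and a hypothetical NVC-style prompt.

```python
# Minimal sketch (not the repo's code): prompt GPT-3 to answer in the style
# of Rosenberg's Nonviolent Communication. Assumes the legacy openai package.
import openai

openai.api_key = "sk-..."  # your OpenAI API key

NVC_PROMPT = (
    "You are a therapist trained in Marshall Rosenberg's Nonviolent "
    "Communication. Reflect the speaker's observations, feelings, needs, "
    "and requests without judgment.\n\nClient: {message}\nTherapist:"
)

def nvc_reply(message: str) -> str:
    response = openai.Completion.create(
        engine="davinci",
        prompt=NVC_PROMPT.format(message=message),
        max_tokens=150,
        temperature=0.7,
        stop=["Client:"],
    )
    return response.choices[0].text.strip()

print(nvc_reply("I'm overwhelmed at work and nobody seems to notice."))
```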
finetune-gpt2xl
- Fine-tuning?
git clone the finetuning repo (https://github.com/Xirider/finetune-gpt2xl), go into it, and install the rest of the requirements with pip install -r requirements.txt; a scripted version of these steps is sketched below.
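A minimal sketch of those two steps, assuming git and pip are available on the machine:

```python
# Clone finetune-gpt2xl and install its requirements.
import subprocess
import sys

subprocess.run(
    ["git", "clone", "https://github.com/Xirider/finetune-gpt2xl.git"],
    check=True,
)
subprocess.run(
    [sys.executable, "-m", "pip", "install", "-r", "requirements.txt"],
    cwd="finetune-gpt2xl",
    check=True,
)
```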
- Training text-generating models locally
- Dataset For GPT Fine-Tuning
I would like to understand a little better how to organize texts for fine-tuning, especially for GPT-Neo. I plan to use this repo's procedure, which includes the following notice:
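For orientation, here is a hedged sketch of the layout the repo's training script expects: its README describes train.csv / validation.csv files with a single text column. The file name and header below are taken from that description; verify them against the notice quoted in the post.

```python
# Write training documents into a one-column CSV with a "text" header,
# the format finetune-gpt2xl's README describes for train.csv.
import csv

documents = [
    "First training document...",
    "Second training document...",
]

with open("train.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["text"])  # expected header
    for doc in documents:
        writer.writerow([doc])
```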
- How to share the finetuned model
The code suggested in the video (and in the repo) uses the --fp16 flag, but the "DeepSpeed Integration" article says that,
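One hedged way to share the result: once the output directory holds consolidated full-precision weights (DeepSpeed's documentation describes a zero_to_fp32.py helper for converting ZeRO checkpoints), the model can be pushed with the standard transformers hub API. The repo id below is a placeholder.

```python
# Hedged sketch: load the finetuned checkpoint and push it to the
# Hugging Face Hub. Assumes you are logged in (huggingface-cli login)
# and that "finetuned/" already contains consolidated (non-ZeRO) weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("finetuned")
tokenizer = AutoTokenizer.from_pretrained("finetuned")

model.push_to_hub("your-username/gpt2-xl-finetuned")       # placeholder repo id
tokenizer.push_to_hub("your-username/gpt2-xl-finetuned")
```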
- [D] I made a script that does all the work to deploy GPT-NEO on Windows 10. (Please Test)
- [Project] Estimating fine-tuning cost
Fine-tuning GPT-Neo 2.7B on WikiText (180 MB) took me about 45 minutes on one preemptible V100 instance on Google Cloud. At $1.30 per hour, that comes to roughly $1 total; the arithmetic is spelled out below. Here are the steps: https://github.com/Xirider/finetune-gpt2xl
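The estimate, worked through:

```python
# Cost estimate from the post: ~45 minutes on a preemptible V100
# billed at about $1.30 per hour.
hours = 45 / 60                              # 0.75 h
rate_usd_per_hour = 1.30
print(f"${hours * rate_usd_per_hour:.2f}")   # $0.98, i.e. roughly $1
```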
- [P] Guide: Finetune GPT2-XL (1.5 Billion Parameters, the biggest model) on a single 16 GB VRAM V100 Google Cloud instance with Huggingface Transformers using DeepSpeed
Here I explain the setup and commands to get it running: https://github.com/Xirider/finetune-gpt2xl (a hedged sketch of the launch follows below).
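For concreteness, a sketch of the launch the guide describes: DeepSpeed runs the repo's run_clm.py with a ZeRO config. The flag names follow the repo's README, but treat the exact values as assumptions and defer to the guide itself.

```python
# Hypothetical launch wrapper; the canonical command lives in the repo's README.
import subprocess

subprocess.run(
    [
        "deepspeed", "--num_gpus=1", "run_clm.py",
        "--deepspeed", "ds_config.json",       # ZeRO config shipped with the repo
        "--model_name_or_path", "gpt2-xl",
        "--train_file", "train.csv",
        "--validation_file", "validation.csv",
        "--do_train", "--do_eval", "--fp16",
        "--output_dir", "finetuned",
        "--num_train_epochs", "1",
        "--gradient_accumulation_steps", "2",
        "--per_device_train_batch_size", "8",
    ],
    cwd="finetune-gpt2xl",
    check=True,
)
```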
- Guide: Finetune GPT2-XL (1.5 Billion Parameters, the biggest model) on a single 16 GB VRAM V100 Google Cloud instance with Huggingface Transformers using DeepSpeed
What are some alternatives?
aitg - plug and play many transformers models for http api or command line!
detoxify - Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built using ⚡ Pytorch Lightning and 🤗 Transformers. For access to our API, please email us at [email protected].
Marshall-Rosenberg-NVC - Awesome Resources for learning Marshall Rosenberg's Nonviolent Communication
Extracting-Training-Data-from-Large-Langauge-Models - A re-implementation of the "Extracting Training Data from Large Language Models" paper by Carlini et al., 2020
SkPy - An unofficial Python library for interacting with the Skype HTTP API.
OpenCue - A render management system you can deploy for visual effects and animation productions.
ChatGPT-Mobile - A cross platform application to allow use of ChatGPT through mobile MMS applications such as iMessage and Android
wavy-api - API behind the website Wavy
Keyboard-Layout-Editor-for-Blender - Allows you to import keyboard layouts into blender and render them in 3d