gpt-2
| | gpt-2 (openai) | gpt-2 (nshepperd) |
|---|---|---|
| Mentions | 63 | 3 |
| Stars | 21,111 | 1,043 |
| Growth | 1.9% | - |
| Activity | 2.5 | 0.0 |
| Last commit | 17 days ago | over 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gpt-2 (openai) mentions
- Sam Altman is still trying to return as OpenAI CEO
- Build Personal ChatGPT Using Your Data
- Are the recent advancements in AI technology primarily driven by recent discoveries, or by progress in hardware capabilities and the abundance of available data?
"Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper. "
- Bing is now the default search for ChatGPT
They did release GPT-2 under the MIT License.
- Don Knuth Plays with ChatGPT
Did you arrive at this certainty through reading something other than what OpenAI has published? The document [0] that describes the training data for GPT-2 makes this assertion hilarious to me.
[0]: https://github.com/openai/gpt-2/blob/master/model_card.md#da...
- What frustrates you about using AI, or about the discussion around it?
- The AI
- Help with pet project to learn - Running ChatGPT-2 at home
I made a clone of https://github.com/openai/gpt-2 on my local laptop.
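For anyone trying the same at home, one low-friction route (not the repo's own TensorFlow scripts) is loading the released weights through Hugging Face transformers; a minimal sketch, assuming `pip install transformers torch`:

```python
from transformers import pipeline

# Loads the publicly released 124M-parameter GPT-2 weights (model id "gpt2").
generator = pipeline("text-generation", model="gpt2")
out = generator("Running GPT-2 on my laptop:", max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```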
- On the dangers of AI and the proposals to halt development for 6 months.
- Elon Musk, Y. Bengio, Andrew Yang, etc. called for a temporary pause on training systems exceeding GPT-4
Elon's 100M put this in the arena. https://github.com/openai/gpt-2
gpt-2 (nshepperd) mentions
- Data format to fine-tune GPT-2 for code generation
I'm following the https://github.com/nshepperd/gpt-2 repo to fine-tune the GPT-2 355M model. I've collected (comment, code) pairs from GitHub into a text file where the data has the following format:
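The format sample itself didn't survive here. Purely as an illustration of one common convention, a sketch that serializes such pairs into the plain text a training script like nshepperd's consumes, separating samples with GPT-2's end-of-text token (the `# COMMENT:` marker and the separator are assumptions, not the poster's actual layout):

```python
# Hypothetical (comment, code) pairs; markers and separator are assumptions.
pairs = [
    ("reverse a list", "def rev(xs):\n    return xs[::-1]"),
    ("read a whole file", "def read_all(path):\n    return open(path).read()"),
]

with open("train.txt", "w") as f:
    for comment, code in pairs:
        f.write(f"# COMMENT: {comment}\n{code}\n<|endoftext|>\n")
```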
- Trained GPT-2 to write poems
- What is r/SubSimulatorGPT?
I then put all the submissions and comments in a txt file in an order mimicking reddit's "sort by top", and fine-tuned for each subreddit using the GPT-2 345M model, specifically nshepperd's GPT-2 implementation. The tutorial written by u/gwern provided very helpful guidance as well.
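A sketch of the preprocessing described, with made-up field names, writing each thread in score order to mimic "sort by top" (the `<|endoftext|>` separator between threads is an assumption, not necessarily what the poster used):

```python
# Hypothetical scraped data; field names are illustrative only.
threads = [
    {"title": "TIL something surprising", "score": 5120,
     "comments": [("great write-up", 300), ("source?", 41)]},
    {"title": "Ask me anything", "score": 880,
     "comments": [("first question", 42)]},
]

with open("subreddit.txt", "w") as f:
    # Threads sorted by score, then each thread's comments by score,
    # approximating reddit's "sort by top" ordering.
    for t in sorted(threads, key=lambda t: t["score"], reverse=True):
        f.write(t["title"] + "\n")
        for body, score in sorted(t["comments"], key=lambda c: c[1], reverse=True):
            f.write(body + "\n")
        f.write("<|endoftext|>\n")
```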
What are some alternatives?
dalle-mini - DALL·E Mini - Generate images from a text prompt
minGPT - A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
sentencepiece - Unsupervised text tokenizer for Neural Network-based text generation.
jukebox - Code for the paper "Jukebox: A Generative Model for Music"
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
stylegan2-pytorch - Simplest working implementation of Stylegan2, state of the art generative adversarial network, in Pytorch. Enabling everyone to experience disentanglement
dalle-2-preview
tensorrtx - Implementation of popular deep learning networks with TensorRT network definition API
gpt-2-training - Training GPT-2 on a Russian language corpus