gpt-2-output-dataset
metaseq
| | gpt-2-output-dataset | metaseq |
|---|---|---|
| Mentions | 11 | 53 |
| Stars | 1,882 | 6,386 |
| Growth | 1.2% | 1.0% |
| Activity | 2.9 | 6.2 |
| Latest commit | 5 months ago | 10 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gpt-2-output-dataset
- Being accused of using ChatGPT in my assignment, what should I do?
especially: "Our classifier is not fully reliable. In our evaluations on a “challenge set” of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time (false positives). Our classifier’s reliability typically improves as the length of the input text increases. Compared to our previously released classifier, this new classifier is significantly more reliable on text from more recent AI systems." Many other classifiers are similar, e.g.:
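To make those percentages concrete, here is a minimal sketch of how a detector's true-positive and false-positive rates are defined; the `detection_rates` helper, the threshold, and the sample scores are illustrative inventions, not OpenAI's classifier or its data:

```python
# Illustrative only: how figures like "26% true positives / 9% false positives" are defined.
# The scores, labels, and threshold below are made up; they are not OpenAI's classifier outputs.

def detection_rates(scores, labels, threshold=0.5):
    """scores: classifier's P(AI-written); labels: 1 = AI-written, 0 = human-written."""
    predicted_ai = [s >= threshold for s in scores]
    tp = sum(p and l == 1 for p, l in zip(predicted_ai, labels))   # AI text correctly flagged
    fp = sum(p and l == 0 for p, l in zip(predicted_ai, labels))   # human text wrongly flagged
    true_positive_rate = tp / max(1, sum(labels))
    false_positive_rate = fp / max(1, len(labels) - sum(labels))
    return true_positive_rate, false_positive_rate

# A 26% TPR / 9% FPR detector misses ~3 in 4 AI-written texts while still
# accusing roughly 1 in 11 human writers, which is why single verdicts prove little.
tpr, fpr = detection_rates(scores=[0.9, 0.3, 0.2, 0.6, 0.1], labels=[1, 1, 1, 0, 0])
print(f"TPR={tpr:.0%}, FPR={fpr:.0%}")
```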
- Has OpenAI made GPT-2 available for download? I mean the (pre-)trained model, not the source code. How large is it, in MB of download traffic and MB on disk?
Links my search found: https://github.com/openai/gpt-2-output-dataset (a dataset? I want the pre-trained GPT model).
- GPTZero case study finds it is accurate on less than 50% of text
- [P] I launched “CatchGPT”, a supervised model trained with millions of text examples, to detect GPT created content
- Detect ChatGPT Generated Content
- meet the villain:
Source: the literal source code and paper by the original creators of a detector that most of these knockoff detectors are based on.
- Originality.ai is a HUGE scam.
OpenAI published a detector themselves that seems to be quite good. https://github.com/openai/gpt-2-output-dataset/tree/master/detector
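For context, the linked detector is a RoBERTa classifier fine-tuned on GPT-2 outputs; the repo ships downloadable checkpoints plus a small Flask demo server. A minimal sketch of scoring a passage, assuming the community-mirrored weights published on the Hugging Face Hub as `roberta-base-openai-detector` (treat the exact label names as an assumption):

```python
# Minimal sketch: score a passage with the RoBERTa-based GPT-2 output detector.
# Assumes the Hugging Face Hub mirror "roberta-base-openai-detector"; the repo itself
# distributes .pt checkpoints and a Flask server (python -m detector.server ...).
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

text = "The quick brown fox jumps over the lazy dog, and then writes a term paper about it."
print(detector(text)[0])  # e.g. {'label': 'Real', 'score': ...}; labels are typically 'Real' vs 'Fake'
```

Because it was trained on GPT-2 outputs, its verdicts on ChatGPT-era text are considerably shakier, which is much of what the threads above are arguing about.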
- [Hobby Scuffles] Week of December 19, 2022
Here is OpenAI's own detector, but it can be beaten by fairly basic techniques such as automatic paraphrasing.
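Continuing the sketch above (same assumed `roberta-base-openai-detector` weights), this is the shape of that evasion: reword the text and score it again. The rewording here is done by hand purely for illustration; an automatic paraphrasing model would do the same at scale.

```python
# Sketch of why paraphrasing defeats the detector: the same claim, reworded,
# can receive a very different score. Assumes the Hub mirror of the detector.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

original   = "The industrial revolution fundamentally transformed patterns of labor and urban life."
paraphrase = "Work and city living changed from the ground up once industrialization took hold."

for text in (original, paraphrase):
    print(detector(text)[0], "--", text)
```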
- Meta announces a GPT3-size language model you can download
- GPT-3 output detection
To a certain extent, detection worked for GPT-2 because of the smaller dataset of just 40 GB. Even for that model, researchers running detection found accurate results only in the:
metaseq
- Training great LLMs from ground zero in the wilderness as a startup
This is a super important issue; it affects the pace and breadth of iteration in AI almost as much as raw hardware improvements do. The blog is fun but somewhat shallow, and not technical or very surprising if you've worked with clusters of GPUs in any capacity over the years. (I liked the perspective of a former Googler, but I'm not sure why past colleagues would recommend JAX over PyTorch for LLMs outside of Google.) I hope this newco eventually releases a more technical report about their training adventures, like the PDF here: https://github.com/facebookresearch/metaseq/tree/main/projec...
- Chronicles of OPT Development
- See the pitch memo that raised €105M for four-week-old startup Mistral
The number of people who can actually pre-train a true LLM is very small. It remains a major feat, with many tweaks and tricks. Case in point: the 114-page OPT-175B logbook [1].
[1] https://github.com/facebookresearch/metaseq/blob/main/projec...
- Technology: "Austro-ChatGPT", but no money for testing
- OPT (Open Pre-trained Transformers) is a family of NLP models trained on billions of tokens of text obtained from the internet
- Current state-of-the-art open source LLM
- Elon Musk Buys Ten Thousand GPUs for Secretive AI Project
Reliability at scale: take a look at the OPT training logbook for their 175B model run; it needed a lot of babysitting. In my experience, a TPU training run at that scale requires a restart about once every one to two weeks, and they provide the middleware to monitor the health of the cluster and pick up on hardware failures.
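Much of that babysitting is checkpoint-and-resume plumbing. Below is a generic, single-process PyTorch sketch of the pattern, not metaseq's actual fault-tolerance middleware: save training state periodically, and on restart after a failure, resume from the newest checkpoint.

```python
# Generic checkpoint/resume loop, the kind of plumbing large runs like OPT-175B rely on.
# Simplified illustration only; model, loss, and step counts are placeholders.
import glob
import os
import torch
import torch.nn as nn

model = nn.Linear(512, 512)                      # stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
CKPT_DIR, SAVE_EVERY, TOTAL_STEPS = "checkpoints", 1000, 10_000
os.makedirs(CKPT_DIR, exist_ok=True)

def latest_checkpoint():
    paths = glob.glob(os.path.join(CKPT_DIR, "step_*.pt"))
    return max(paths, key=os.path.getmtime) if paths else None

# Resume from the newest checkpoint if the previous run died (node failure, preemption, ...).
start_step = 0
ckpt_path = latest_checkpoint()
if ckpt_path:
    ckpt = torch.load(ckpt_path)
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    start_step = ckpt["step"] + 1

for step in range(start_step, TOTAL_STEPS):
    loss = model(torch.randn(8, 512)).pow(2).mean()   # dummy loss for illustration
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % SAVE_EVERY == 0:                        # periodic snapshot of all training state
        torch.save({"model": model.state_dict(),
                    "optimizer": optimizer.state_dict(),
                    "step": step},
                   os.path.join(CKPT_DIR, f"step_{step}.pt"))
```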
- Is AI Development more fun than Software Development?
I really appreciated this log of Facebook training a large language model; it shows how troublesome AI development can be: https://github.com/facebookresearch/metaseq/tree/main/projects/OPT/chronicles
- Visual ChatGPT
Stable Diffusion will run on any decent gaming GPU or a modern MacBook; meanwhile, LLMs comparable to GPT-3/ChatGPT have had pretty insane memory requirements, e.g. <https://github.com/facebookresearch/metaseq/issues/146>
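The back-of-the-envelope arithmetic behind those requirements: the weights of a 175B-parameter model alone run to hundreds of gigabytes, before any activations, KV cache, or optimizer state are counted. A rough sketch, using standard fp16/fp32 byte sizes and an assumed ~1B parameters for Stable Diffusion:

```python
# Back-of-the-envelope memory estimate for inference: parameters x bytes per parameter.
# Ignores activations, KV cache, and optimizer state (training needs several times more).
GIB = 1024 ** 3

def weight_memory_gib(num_params: float, bytes_per_param: int) -> float:
    return num_params * bytes_per_param / GIB

for name, params in [("Stable Diffusion (~1B params, assumed)", 1e9),
                     ("OPT-175B / GPT-3 scale", 175e9)]:
    print(f"{name}: fp16 ~{weight_memory_gib(params, 2):,.0f} GiB, "
          f"fp32 ~{weight_memory_gib(params, 4):,.0f} GiB")

# OPT-175B in fp16 is ~326 GiB of weights alone, far beyond any single consumer GPU,
# which is what the linked metaseq issue about hardware requirements is getting at.
```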
- Ask HN: Is There On-Call in ML?
It seems so; check this logbook from Meta: https://github.com/facebookresearch/metaseq/blob/main/projec...
What are some alternatives?
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
stable-diffusion - A latent text-to-image diffusion model
oxen-release - Lightning fast data version control system for structured and unstructured machine learning datasets. We aim to make versioning datasets as easy as versioning code.
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"
nlp-resume-parser - NLP-powered, GPT-3 enabled Resume Parser from PDF to JSON.
GLM-130B - GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
manim - Animation engine for explanatory math videos
cupscale - Image Upscaling GUI based on ESRGAN
ChatGPT.el - ChatGPT in Emacs
YaLM-100B - Pretrained language model with 100B parameters
min-dalle - min(DALL·E) is a fast, minimal port of DALL·E Mini to PyTorch