test vs gpt-neo
| | test | gpt-neo |
|---|---|---|
| Mentions | 9 | 82 |
| Stars | 933 | 6,158 |
| Growth | - | - |
| Activity | 2.5 | 7.3 |
| Latest commit | 11 months ago | about 2 years ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
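As a rough illustration of that metric, here is a minimal Python sketch of one way a recency-weighted score could be computed. The exponential half-life weighting and the percentile scaling are assumptions for illustration only, not the site's actual formula:

```python
from datetime import datetime, timezone

def raw_activity(commit_dates, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes
    0.5 ** (age_in_days / half_life_days), so recent commits
    count more than older ones (hypothetical weighting)."""
    now = datetime.now(timezone.utc)
    return sum(0.5 ** ((now - d).days / half_life_days) for d in commit_dates)

def relative_activity(score, all_scores):
    """Scale a raw score to 0-10 by percentile rank across all tracked
    projects, so a 9.0 means the project is in the top 10%."""
    rank = sum(s <= score for s in all_scores)
    return 10.0 * rank / len(all_scores)
```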
test
- Measuring Multitask Language Understanding
- Mixtral 7B MoE beats LLaMA2 70B in MMLU
Sources:
[1] MMLU Benchmark (Multi-task Language Understanding) | Papers With Code: https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu
[2] MMLU Dataset | Papers With Code: https://paperswithcode.com/dataset/mmlu
[3] hendrycks/test: Measuring Massive Multitask Language Understanding | ICLR 2021 - GitHub: https://github.com/hendrycks/test
[4] lukaemon/mmlu · Datasets at Hugging Face: https://huggingface.co/datasets/lukaemon/mmlu
[5] [2009.03300] Measuring Massive Multitask Language Understanding - arXiv: https://arxiv.org/abs/2009.03300
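For reference, a minimal sketch of pulling one MMLU subject from the Hugging Face hub with the datasets library. The cais/mmlu dataset name and its question/choices/answer fields are assumptions based on the official mirror; the lukaemon/mmlu mirror cited above uses a slightly different layout:

```python
from datasets import load_dataset

# Load one of the 57 MMLU subjects from the Hugging Face hub.
ds = load_dataset("cais/mmlu", "high_school_physics", split="test")

LETTERS = "ABCD"
for row in ds.select(range(3)):
    print(row["question"])
    for letter, choice in zip(LETTERS, row["choices"]):
        print(f"  {letter}. {choice}")
    print("answer:", LETTERS[row["answer"]])  # answer is stored as an index
```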
- BREAKING: Google just released its ChatGPT Killer
With a score of 90.0%, Gemini Ultra is the first model to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities.
- [Colab Notebook] Launch quantized MPT-30B-Chat on Vast.ai using text-generation-inference, integrated with ConversationChain
One method for comparison is the MMLU benchmark: https://arxiv.org/abs/2009.03300.
- Partial Solution To AI Hallucinations
- Announcing GPT-4.
- Show HN: Llama-dl – high-speed download of LLaMA, Facebook's 65B GPT model
Because there are many benchmarks that measure different things.
You need to look at the benchmark that reflects your specific interest.
So in this case ("I wasn't impressed that 30B didn't seem to know who Captain Picard was") the closest relevant benchmark they performed is MMLU (Massive Multitask Language Understanding)[1].
In the LLaMA paper they publish a figure of 63.4% for the 5-shot average setting without fine-tuning on the 65B model, and 68.9% after fine-tuning. This is significantly better than the original GPT-3 (43.9% under the same conditions) but, as they note:
> "[it is] still far from the state-of-the-art, that is 77.4 for GPT code-davinci-002 on MMLU (numbers taken from Iyer et al. (2022))"
InstructGPT[2] (which OpenAI points to as the most relevant ChatGPT publication) doesn't report MMLU performance.
[1] https://github.com/hendrycks/test
[2] https://arxiv.org/abs/2203.02155
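To make the "5-shot" setting concrete, here is a hedged sketch of how such an evaluation is commonly wired up: five solved examples are prepended to each question, and the model's next-token logits over the four answer letters are compared. The prompt template and letter-logit scoring follow common practice rather than any one paper's exact harness, and GPT-Neo is used only as a stand-in model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")
LETTERS = ["A", "B", "C", "D"]

def format_example(question, choices, answer=None):
    # One question block; solved examples include the answer letter.
    block = question + "\n"
    block += "\n".join(f"{l}. {c}" for l, c in zip(LETTERS, choices))
    block += "\nAnswer:"
    return block + (f" {answer}\n\n" if answer else "")

def predict(few_shot, question, choices):
    """few_shot: list of (question, choices, answer_letter) tuples."""
    prompt = "".join(format_example(*ex) for ex in few_shot)
    prompt += format_example(question, choices)
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # next-token logits
    # Compare the logits of the four answer letters (with leading space).
    letter_ids = [tok(f" {l}").input_ids[-1] for l in LETTERS]
    return LETTERS[logits[letter_ids].argmax().item()]
```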
- DeepMind's newest language model, Chinchilla (70B parameters), significantly outperforms Gopher (280B) and GPT-3 (175B) on a large range of downstream evaluation tasks
The benchmark result is 67.6%, a 7.6 percentage-point improvement over Gopher. MMLU is multiple-choice Q&A over various subjects. The questions can be found linked in this GitHub repo (see data).
gpt-neo
- How Open is Generative AI? Part 2
By December 2020, EleutherAI had introduced The Pile, a comprehensive text dataset designed for training models. Subsequently, tech giants such as Microsoft, Meta, and Google used this dataset for training their models. In March 2021, they revealed GPT-Neo, an open-source model under Apache 2.0 license, which was unmatched in size at its launch. EleutherAI’s later projects include the release of GPT-J, a 6 billion parameter model, and GPT-NeoX, a 20 billion parameter model, unveiled in February 2022. Their work demonstrates the viability of high-quality open-source AI models.
- Creating an open source chat bot like ChatGPT for my own dataset without GPU?
Yeah, if that is your requirement you should definitely ignore chatterbot, as it's older and probably not what your teacher wants. I'm looking at the gpt-neo docs right now: https://github.com/EleutherAI/gpt-neo
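For anyone following that link: the simplest way to try GPT-Neo today is through the Hugging Face transformers library rather than the original Mesh TensorFlow code. A minimal generation example, along the lines of the model card's usage:

```python
from transformers import pipeline

# Download GPT-Neo 2.7B from the Hugging Face hub and generate text.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B")
out = generator(
    "EleutherAI has released GPT-Neo, an open-source",
    max_length=50,
    do_sample=True,
    temperature=0.9,
)
print(out[0]["generated_text"])
```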
- Any real competitor to GPT-3 which is open source and downloadable?
3.) EleutherAI's GPT-Neo and GPT-NeoX: EleutherAI is an independent research organization that aims to promote open research in artificial intelligence. They have released GPT-Neo, an open-source language model based on the GPT architecture, and are developing GPT-NeoX, a highly scalable GPT-like model. You can find more information on their GitHub repositories:
GPT-Neo: https://github.com/EleutherAI/gpt-neo
GPT-NeoX: https://github.com/EleutherAI/gpt-neox
- ⚡ Neural - AI Code Generation for Vim
This is one of the first comprehensive plugins that has been rewritten to support multiple AI backends such as OpenAI GPT-3+, and other custom sources in the future such as ChatGPT, GPT-J, GPT-Neo, and more.
- Looks like some Taliban fighters are getting burnt out working the 9-5 grind
GPT-Neo is newer than GPT-2 on the open source side of things. In my experience, it tends to give longer and more creative responses than GPT-2 but not on the level of GPT-3. I've not tried GPT-J or GPT-NeoX, but they're also open source and reportedly better than GPT-Neo (albeit less accessible).
- H3 - a new generative language model that outperforms GPT-Neo-2.7B with only *2* attention layers! In H3, the researchers replace attention with a new layer based on state space models (SSMs). With the right modifications, they find that it can outperform transformers.
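As a rough intuition for what an SSM layer computes, here is a minimal NumPy sketch of the discrete linear state-space recurrence such layers are built on. The shapes and random parameters are purely illustrative; real H3 layers add learned discretization, FFT-based convolutional computation, and gating on top:

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Discrete linear state space model:
        x[t] = A @ x[t-1] + B @ u[t]
        y[t] = C @ x[t]
    Processes a length-T sequence recurrently, like an RNN, but with
    purely linear dynamics (which is what makes it parallelizable)."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:
        x = A @ x + B @ u_t
        ys.append(C @ x)
    return np.stack(ys)

rng = np.random.default_rng(0)
d_state, d_in, T = 8, 1, 16
A = 0.9 * np.eye(d_state)             # stable state transition
B = rng.normal(size=(d_state, d_in))
C = rng.normal(size=(1, d_state))
y = ssm_scan(A, B, C, rng.normal(size=(T, d_in)))
print(y.shape)  # (16, 1)
```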
- First Open Source Alternative to ChatGPT Has Arrived
- Where is the line for AI and where does ChatGPT stand?
Finally, yes: it is trained via causal language modeling (next-token prediction). The approach has been fairly standard for years; the big difference with the GPT* models is the number of parameters and the volume of text. We still haven't reached a ceiling with LLM parameters; they appear to keep improving with size. This training allows the model to learn a strong representation of language. Their training approach is published, and open-source GPT* versions have already been made and released (https://github.com/EleutherAI/gpt-neo). However, the models are huge and can't be run locally by hobbyists. This gets at larger issues in the democratization of ML.
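To make "next-token prediction" concrete, a minimal sketch of the training objective using the transformers API: passing labels=input_ids makes the library compute the shifted cross-entropy loss these models are trained with. The small GPT-Neo checkpoint is chosen purely so the example runs on modest hardware:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")

ids = tok("The model learns to predict each next token.",
          return_tensors="pt").input_ids

# With labels=input_ids, the library shifts the targets internally and
# returns the average cross-entropy of predicting token t+1 from tokens <= t.
with torch.no_grad():
    loss = model(ids, labels=ids).loss
print(f"per-token cross-entropy: {loss.item():.2f}")
```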
- Using the GPT-3 AI Writer inside Obsidian (This is COOL)
- Teaser trailer for "The Diary of Sisyphus" (2023), the world's first feature film written by an artificial intelligence (GPT-NEO) and produced by Briefcase Films, my indie film studio based in Northern Italy
- GPT-Neo 2.7B, released Mar/2021, and unmaintained/unsupported as of Aug/2021? or;
What are some alternatives?
mmfewshot - OpenMMLab FewShot Learning Toolbox and Benchmark
gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
RAD - RAD Expansion Unit for C64/C128
haystack - LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
ut - C++20 μ(micro)/Unit Testing Framework
openchat - OpenChat: Easy to use opensource chatting framework via neural networks
elm-test-rs - Fast and portable executable to run your Elm tests
tensorflow - An Open Source Machine Learning Framework for Everyone
egghead - discord bot for ai stuff
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
llama-int8 - Quantized inference code for LLaMA models
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.