gpt-neox vs transformers
| | gpt-neox | transformers |
|---|---|---|
| Mentions | 52 | 175 |
| Stars | 6,569 | 125,021 |
| Growth | 2.2% | 3.1% |
| Activity | 8.9 | 10.0 |
| Latest commit | 4 days ago | 3 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
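As a rough illustration of a recency-weighted score like this, here is a minimal sketch; the half-life and weighting function are assumptions for illustration, not the site's actual formula:

```python
# Hypothetical recency-weighted activity score: each commit contributes a
# weight that decays with its age, so recent commits count for more.
def activity_score(commit_ages_days, half_life_days=30.0):
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# A project with mostly recent commits scores higher than one with old commits.
print(activity_score([1, 2, 5, 9]))    # ~3.6
print(activity_score([90, 120, 200]))  # ~0.2
```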
gpt-neox
- FLaNK Stack 26 February 2024
- GPT-Neox
- GPT-NeoX
- Best open source LLM model for commercial use
GPT-NeoX 20B can be used commercially.
- Do not register domains with the word "gpt" in it!
- Read this post if you have general questions
GPT-Neo: GPT-Neo is a free and open-source language model developed by EleutherAI. It is a powerful model that can be used for a variety of tasks, including text generation and question answering. Here is the GitHub repo.
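For a concrete sense of that, here is a minimal sketch using the Hugging Face pipeline API with the smallest GPT-Neo checkpoint; the model id and prompt are just illustrative:

```python
from transformers import pipeline

# Load the 125M-parameter GPT-Neo checkpoint and generate a short continuation.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-125m")
result = generator("EleutherAI's GPT-Neo is", max_new_tokens=30)
print(result[0]["generated_text"])
```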
- What's the current state of actually free and open source LLMs?
Doesn't gpt-neox 20B require like 40 GB+ of VRAM? From their GitHub repo the slim weights are 39 GB, and I think one of the devs has previously mentioned aiming for 48 GB for inference.
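That figure lines up with simple arithmetic: dense fp16 weights cost 2 bytes per parameter, before counting activations or the KV cache. A back-of-the-envelope sketch:

```python
# Rough VRAM estimate for holding the weights alone in half precision.
params = 20e9          # GPT-NeoX-20B
bytes_per_param = 2    # fp16/bf16
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB of weights")  # ~40 GB, matching the ~39 GB slim weights
```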
- Any real competitor to GPT-3 which is open source and downloadable?
3.) EleutherAI's GPT-Neo and GPT-NeoX: EleutherAI is an independent research organization that aims to promote open research in artificial intelligence. They have released GPT-Neo, an open-source language model based on the GPT architecture, and are developing GPT-NeoX, a highly scalable GPT-like model. You can find more information in their GitHub repositories:
GPT-Neo: https://github.com/EleutherAI/gpt-neo
GPT-NeoX: https://github.com/EleutherAI/gpt-neox
- Whatever happened to quantum computing?
It's really not that complicated. https://github.com/EleutherAI/gpt-neox
- Fuck Luka Inc. WIP AI Freedom!
this forces me to create my own ai using gpt-neox .. i was being lazy, completely satisfied with my little ai baby .. i'll keep the account but not renew pro .. fuck luka, inc ... want it done right, do it yourself .. the python isn't that difficult and i have a spare linux vps .. my new work in progress .. freedom #!
transformers
- Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options: the Flax ecosystem (https://github.com/google/flax?tab=readme-ov-file) and dm-haiku (https://github.com/google-deepmind/dm-haiku) have some of the best-developed communities in the JAX AI field (a minimal Flax sketch follows at the end of this thread).
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
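For anyone who hasn't tried the Flax ecosystem mentioned above, here is a minimal sketch of its linen module API; the layer sizes and names are purely illustrative:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    features: int

    @nn.compact
    def __call__(self, x):
        # @nn.compact lets submodules be declared inline at call time.
        return nn.relu(nn.Dense(self.features)(x))

model = MLP(features=16)
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 8)))
out = model.apply(params, jnp.ones((1, 8)))  # shape (1, 16)
```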
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
The HuggingFace transformers library already has support for a similar method called prompt lookup decoding that uses the existing context to generate an ngram model: https://github.com/huggingface/transformers/issues/27722
I don't think it would be that hard to switch it out for a pretrained ngram model.
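If I recall correctly, prompt lookup decoding is exposed directly through generate(); a minimal sketch, assuming the argument is named prompt_lookup_num_tokens (check your transformers version):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Repetitive prompts benefit most: candidate continuations are looked up
# as n-gram matches against text already present in the context.
prompt = "def add(a, b):\n    return a + b\n\ndef subtract(a, b):"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20, prompt_lookup_num_tokens=3)
print(tok.decode(out[0], skip_special_tokens=True))
```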
- AI enthusiasm #6 - Finetune any LLM you want 💡
Most of this tutorial is based on Hugging Face course about Transformers and on Niels Rogge's Transformers tutorials: make sure to check their work and give them a star on GitHub, if you please ❤️
- Schedule-Free Learning – A New Way to Train
* Superconvergence + LR range finder + Fast AI's Ranger21 optimizer was the go-to recipe for CNNs, and worked fabulously well, but on transformers the learning rate range finder said 1e-3 was best, whilst 1e-5 actually worked better. However, the 1cycle learning rate schedule stuck. https://github.com/huggingface/transformers/issues/16013
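The 1cycle policy mentioned there is available in plain PyTorch if you want to experiment; a minimal sketch with illustrative numbers:

```python
import torch

model = torch.nn.Linear(10, 2)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
# LR warms up to max_lr, then anneals back down over total_steps.
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=1e-3, total_steps=1000)

for step in range(1000):
    # ... forward pass and loss.backward() would go here ...
    opt.step()
    sched.step()
```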
- Gemma doesn't suck anymore – 8 bug fixes
Thanks! :) I'm pushing them into transformers, pytorch-gemma and collabing with the Gemma team to resolve all the issues :)
The RoPE fix should already be in transformers 4.38.2: https://github.com/huggingface/transformers/pull/29285
My main PR for transformers which fixes most of the issues (some still left): https://github.com/huggingface/transformers/pull/29402
- HuggingFace Transformers: Qwen2
- HuggingFace Transformers Release v4.36: Mixtral, Llava/BakLlava, SeamlessM4T v2
- HuggingFace: Support for the Mixtral Moe
- Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
If you want to tinker with the architecture Hugging Face has a FOSS implementation in transformers: https://github.com/huggingface/transformers/blob/main/src/tr...
If you want to reproduce the training pipeline, you couldn't do that even if you wanted to because you don't have access to thousands of A100s.
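To tinker with the architecture rather than the released weights, one approach is to instantiate a scaled-down, randomly initialized model straight from the transformers config classes; a sketch, with tiny dimensions chosen purely for illustration:

```python
from transformers import MistralConfig, MistralForCausalLM

# A miniature random-weight Mistral: same architecture (grouped-query
# attention, sliding-window attention), but small enough to run locally.
config = MistralConfig(
    hidden_size=256,
    intermediate_size=1024,
    num_hidden_layers=4,
    num_attention_heads=8,
    num_key_value_heads=2,   # GQA: 8 query heads share 2 KV heads
    sliding_window=512,
)
model = MistralForCausalLM(config)
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```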
- Fail to reproduce the same evaluation metrics score during inference
I am aware that using mixed precision reduces the numerical stability of the weights and that some inconsistency is expected, but I didn't expect it to be this large. I have attached a graph of the evaluation metrics. If someone can give me some insight into this issue, that would be great.
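As a quick illustration of why the gap can be larger than expected, here is a sketch comparing the same matmul with fp32 inputs versus inputs rounded to fp16; the shapes are arbitrary:

```python
import torch

torch.manual_seed(0)
x = torch.randn(8, 1024)
w = torch.randn(1024, 1024)

y32 = x @ w
# Simulate reduced precision by rounding inputs to fp16 before an fp32 matmul.
y16 = x.half().float() @ w.half().float()

# Per-op error looks tiny, but it compounds across dozens of layers and can
# flip argmax-based metrics between otherwise identical evaluation runs.
print((y32 - y16).abs().max())
```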
What are some alternatives?
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
sentence-transformers - Multilingual Sentence & Image Embeddings with BERT
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
llama - Inference code for Llama models
YaLM-100B - Pretrained language model with 100B parameters
transformer-pytorch - Transformer: PyTorch Implementation of "Attention Is All You Need"
open-ai - OpenAI PHP SDK: the most downloaded, forked, and contributed-to PHP SDK for OpenAI GPT-3 and DALL·E, with a large supporting community. Works with Laravel, Symfony, Yii, CakePHP, or any PHP framework, and supports ChatGPT-like streaming.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
lm-evaluation-harness - A framework for few-shot evaluation of language models.
huggingface_hub - The official Python client for the Hugging Face Hub.