grok-1 vs EasyLM

| | grok-1 | EasyLM |
|---|---|---|
| Mentions | 8 | 8 |
| Stars | 48,188 | 2,247 |
| Growth | 4.5% | - |
| Activity | 5.9 | 7.7 |
| Last commit | 7 days ago | 4 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
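The site's exact formula isn't given, but the example above (a score of 9.0 ≈ top 10%) is consistent with a simple linear mapping from the 0-10 activity score to a percentile. A hypothetical sketch of that relationship:

```python
def top_percent(activity_score: float) -> float:
    """Hypothetical mapping from a 0-10 activity score to the
    'top N%' bracket of tracked projects, assuming the linear
    relationship implied by the example above (9.0 -> top 10%).
    """
    return 100.0 - 10.0 * activity_score

# An activity of 9.0 corresponds to the top 10% of tracked projects.
print(top_percent(9.0))  # 10.0
```

This is only an illustration of how to read the number; the tracker itself also weights recent commits more heavily, which a single linear formula does not capture.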
grok-1
-
Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options.
The Flax ecosystem
https://github.com/google/flax?tab=readme-ov-file
or dm-haiku
https://github.com/google-deepmind/dm-haiku
were some of the best-developed communities in the JAX AI field.
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
-
Elon Musk's xAI previews Grok-1.5V, its first multimodal model
Anyone know the system requirements? Is anyone even able to run it? In their last release, Grok-1, the issue tracker is full of people who can't even run it: https://github.com/xai-org/grok-1/issues
- Grok-1 Weights Published
- Grok-1
- FLaNK AI Weekly 18 March 2024
- X.ai's Grok-1 Model Is Officially Open-Source and Larger Than Expected
- Grok-1 (LLM with 314B parameters) is now open source
- Elon drops open source Grok onto the stage
EasyLM
- Maxtext: A simple, performant and scalable Jax LLM
- How To Fine-Tune LLaMA, OpenLLaMA, And XGen, With JAX On A GPU Or A TPU
-
Open-sourced LLMs are adept at mimicking ChatGPT's style but not its factuality. There exists a substantial capabilities gap, which requires a better base LM.
Title: The False Promise of Imitating Proprietary LLMs
Authors: Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, Dawn Song
Word Count: 3,400
Average Reading Time: 18-20 minutes
Source Code: https://github.com/young-geng/EasyLM
Additional Links: https://huggingface.co/young-geng/koala-eval, https://huggingface.co/young-geng/koala
-
Paid dev gig: develop a basic LLM PEFT finetuning utility
Check out EasyLM https://github.com/young-geng/EasyLM
-
OpenLLaMA Releases 7B/3B Checkpoints with 700B/600B Tokens
We release the weights in two formats: an EasyLM format to be used with our EasyLM framework, and a PyTorch format to be used with the Hugging Face transformers library.
-
OpenLLaMA: An Open Reproduction of LLaMA
I am quite new to this, I would like to get it running. Would the process roughly be:
1. Get a machine with a decent GPU, probably by renting a cloud GPU.
2. On that machine download the weights/model/vocab files from https://huggingface.co/openlm-research/open_llama_7b_preview...
3. Install Anaconda. Clone https://github.com/young-geng/EasyLM/.
4. Install EasyLM:
conda env create -f scripts/gpu_environment.yml
- Koala: A Dialogue Model for Academic Research [Finetuned Llama-13B on a dataset generated by ChatGPT]
What are some alternatives?
FLaNK-python-processors - Many processors
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
rnote - Sketch and take handwritten notes.
camel - 🐫 CAMEL: Communicative Agents for “Mind” Exploration of Large Language Model Society (NeurIPS 2023) https://www.camel-ai.org
pytorch-image-models - PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNet-V3/V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
Open-Llama - The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF.
sqlglot - Python SQL Parser and Transpiler
brev-cli - Connect your laptop to cloud computers. Follow to stay updated about our product
quarto-cli - Open-source scientific and technical publishing system built on Pandoc.
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
openvino_notebooks - 📚 Jupyter notebook tutorials for OpenVINO™
modal-examples - Examples of programs built using Modal