DeepSpeed
llama
| | DeepSpeed | llama |
|---|---|---|
| Mentions | 41 | 134 |
| Stars | 25,088 | 22,199 |
| Growth | 61.0% | 30.7% |
| Activity | 9.6 | 7.6 |
| Latest commit | 2 days ago | 12 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DeepSpeed
- Using --deepspeed requires lots of manual tweaking
Filed a discussion item on the deepspeed project: https://github.com/microsoft/DeepSpeed/discussions/3531
Solution: I don't know; this is where I am stuck. https://github.com/microsoft/DeepSpeed/issues/1037 suggests that I just need to 'apt install libaio-dev', but I've done that and it doesn't help.
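The libaio dependency comes from DeepSpeed's async-I/O (AIO) op, which is needed for the ZeRO NVMe-offload path; `ds_report` will show whether that op can be built on a given machine. As a rough illustration (not taken from the linked thread), here is a minimal sketch of a config that exercises that path; the model, learning rate, and `/local_nvme` path are placeholders:

```python
# Rough sketch (not from the linked thread): a DeepSpeed config exercising the
# ZeRO-3 NVMe-offload path, which is what requires the libaio-backed AIO op.
# The model, learning rate, and /local_nvme path are placeholders.
import torch
import deepspeed

model = torch.nn.Linear(4096, 4096)  # stand-in for a real model

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        # These two blocks are what pull in the AIO/libaio requirement;
        # offloading to "cpu" (or not offloading at all) avoids it.
        "offload_param": {"device": "nvme", "nvme_path": "/local_nvme"},
        "offload_optimizer": {"device": "nvme", "nvme_path": "/local_nvme"},
    },
}

# Meant to be started with the `deepspeed` launcher so the distributed
# environment variables are already set.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```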
- Whether ML computation engineering expertise will be valuable is the question.
There could be some spectrum of this expertise. For instance, https://github.com/NVIDIA/FasterTransformer, https://github.com/microsoft/DeepSpeed
- FLiPN-FLaNK Stack Weekly for 17 April 2023
- DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-Like Models
- DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-Like Models
- 12-Apr-2023 AI Summary
DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales (https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-chat)
- Microsoft DeepSpeed
- Apple: Transformer architecture optimized for Apple Silicon
I'm following this closely, together with other efforts like GPTQ Quantization and Microsoft's DeepSpeed, all of which are bringing down the hardware requirements of these advanced AI models.
- Facebook LLAMA is being openly distributed via torrents
- https://github.com/microsoft/DeepSpeed
Anything that could bring this to a 10GB 3080 or 24GB 3090 without 60s/it per token?
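For the 10GB 3080 / 24GB 3090 question, the usual answer in these threads is weight quantization, either GPTQ or int8 via bitsandbytes. A minimal sketch of the int8 route, assuming the LLaMA weights have already been converted to Hugging Face format and that `transformers`, `accelerate`, and `bitsandbytes` are installed; the checkpoint path is a placeholder:

```python
# Sketch only: load converted LLaMA weights in 8-bit to roughly halve VRAM use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-13b-hf"  # placeholder: converted checkpoint directory

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_8bit=True,   # bitsandbytes int8 weights (CUDA GPU required)
    device_map="auto",   # let accelerate place layers across GPU/CPU
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```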
llama
- UAE's Technology Innovation Institute Launches Open-Source "Falcon 40B" Large Language Model for Research & Commercial Utilization
It is the best open-source model currently available. Falcon-40B outperforms LLaMA, StableLM, RedPajama, MPT, etc. See the OpenLLM Leaderboard.
- [D] High-quality, open-source implementations of LLMs
LLaMA [GitHub]
- PSA: There is no 30B LLaMA model; it was a typo. The actual model has 33B parameters; please stop referring to it as "LLaMA-30B."
- New Llama 13B model from Nomic.AI: GPT4All-13B-Snoozy. Available on Hugging Face in HF, GPTQ and GGML formats
Base Model Repository: https://github.com/facebookresearch/llama
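Of the three formats mentioned, GGML is the CPU-oriented one used by llama.cpp. A small sketch of loading such a file, assuming the `llama-cpp-python` bindings are installed; the file name is a placeholder:

```python
# Sketch: run a GGML-format checkpoint on CPU via the llama-cpp-python bindings.
from llama_cpp import Llama

# Placeholder file name; use whichever quantized GGML file you downloaded.
llm = Llama(model_path="./gpt4all-13b-snoozy.ggml.q4_0.bin", n_ctx=2048)

out = llm("Tell me a short joke.", max_tokens=32)
print(out["choices"][0]["text"])
```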
- Leaked Google document: “We Have No Moat, And Neither Does OpenAI”
By the way, for people not familiar with the leak: Llama models are for research purposes only, and you need to request the download link from Meta. A user on GitHub just committed a magnet link into the readme to "save bandwidth"... a proper chad; some might even compare the feat to Armstrong's first step on the moon lol.
- Google “We Have No Moat, and Neither Does OpenAI”
There's a pull request in the official LLaMA repo that adds Magnet links for all the models to the README. Until these were uploaded to HuggingFace, this PR was the primary source for most people downloading the model.
https://github.com/facebookresearch/llama/pull/73/files
Two months later, Facebook hasn't merged the change, but they also haven't deleted it or tried to censor it in any way. I find that hard to explain unless the leak really was intentional; with pretty much any large company, this kind of thing would normally get killed on sight.
- [N] OpenLLaMA: An Open Reproduction of LLaMA
If you have lots of available VRAM and a powerful GPU, then use the original llama inference code, which is actually open source.
- DeepDoctection
I think the local-model SOTA is llama, which has a 2048-token context [1].
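A practical consequence of that 2048-token window is that long prompts have to be trimmed before generation. A small sketch using the SentencePiece tokenizer that ships with the LLaMA download (the `tokenizer.model` path is an assumption):

```python
# Sketch: count tokens with LLaMA's SentencePiece model and keep only the last
# 2048 so the prompt fits the context window (in practice you would also
# reserve room for the tokens you want the model to generate).
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="tokenizer.model")  # from a LLaMA download

text = open("long_document.txt").read()
ids = sp.encode(text)      # list of token ids
if len(ids) > 2048:
    ids = ids[-2048:]      # keep the most recent 2048 tokens
prompt = sp.decode(ids)
print(f"{len(ids)} tokens kept")
```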
- [R] CodeCapybara: Another open source model for code generation based on instruction tuning, outperformed Llama and CodeAlpaca
Exactly, that's what we are trying to figure out. Most of the previous work did not release the evaluation scripts, only the pretrained model and the numbers in the paper, so no one can actually reproduce the results and everyone just has to trust that the numbers are correct. There are similar issues here: https://github.com/facebookresearch/llama/issues/223
- Best datasets for local training?
What are some alternatives?
ColossalAI - Making large AI models cheaper, faster and more accessible
fairscale - PyTorch extensions for high performance and large scale training.
langchain - ⚡ Building applications with LLMs through composability ⚡
TensorRT - NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.
Megatron-LM - Ongoing research training transformer models at scale
text-generation-webui - A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data. [Moved to: https://github.com/jerryjliu/llama_index]
gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
alpaca-lora - Instruct-tune LLaMA on consumer hardware
KoboldAI-Client