DeepSpeed vs phasellm

| | DeepSpeed | phasellm |
| --- | --- | --- |
| Mentions | 51 | 14 |
| Stars | 32,834 | 442 |
| Growth | 1.6% | - |
| Activity | 9.8 | 8.9 |
| Latest commit | about 20 hours ago | 3 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DeepSpeed
- Can we discuss MLOps, Deployment, Optimizations, and Speed?
DeepSpeed can handle parallelism concerns, and even offload data/model to RAM, or even NVMe (!?). I'm surprised I don't see this project used more.
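For context on what that offloading looks like, here is a minimal sketch of a ZeRO stage-3 config with CPU optimizer offload and NVMe parameter offload. The batch size, learning rate, and `nvme_path` are placeholder assumptions, not values from the post.

```python
import torch
import deepspeed

# Illustrative ZeRO stage-3 config: optimizer state offloaded to CPU RAM,
# parameters offloaded to an NVMe drive. Values below are placeholders.
ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "offload_param": {"device": "nvme", "nvme_path": "/local_nvme"},
    },
}

model = torch.nn.Linear(1024, 1024)  # stand-in for a real model

# deepspeed.initialize wires the model, optimizer, and offload engines
# together according to the config.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```

In practice this is run via the `deepspeed` CLI launcher rather than plain `python`, since the engine expects a distributed environment.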
- [P][D] A100 is much slower than expected at low batch size for text generation
- DeepSpeed-FastGen: High-Throughput for LLMs via MII and DeepSpeed-Inference
- DeepSpeed-FastGen: High-Throughput Text Generation for LLMs
- Why async gradient update doesn't get popular in LLM community?
- DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models (r/MachineLearning)
- [P] DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models
- A comprehensive guide to running Llama 2 locally
While on the surface, a 192GB Mac Studio seems like a great deal (it's not much more than a 48GB A6000!), there are several reasons why this might not be a good idea:
* I assume most people have never used llama.cpp Metal w/ large models. It will drop to CPU speeds whenever the context window is full: https://github.com/ggerganov/llama.cpp/issues/1730#issuecomm... - sure, this might be fixed in the future, but it's been an issue since Metal support was added, and it's a significant problem if you are actually trying to use it for inference. With 192GB of memory, you could probably run larger models w/o quantization, but I've never seen anyone post benchmarks of their experiences. Note that at that point, the limited memory bandwidth will be a big factor.
* If you are planning on using Apple Silicon for ML/training, I'd also be wary. There are multi-year-old open bugs in PyTorch [1], and most major LLM libs like deepspeed, bitsandbytes, etc., don't have Apple Silicon support [2][3].
You can see similar patterns w/ Stable Diffusion support [4][5]: support lagging by months, lots of problems, and poor performance with inference, much less fine-tuning. You can apply this to basically any ML application you want (stt, tts, video, etc.).
Macs are fine to poke around with, but if you actually plan to do more than run a small LLM and say "neat", especially for a business, recommending a Mac to anyone getting started w/ ML workloads is a bad take. (In general, for anyone getting started, unless you're just burning budget, renting cloud GPU is going to be the best cost/perf, although on-prem/local obviously has other advantages.)
[1] https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3A...
[2] https://github.com/microsoft/DeepSpeed/issues/1580
[3] https://github.com/TimDettmers/bitsandbytes/issues/485
[4] https://github.com/AUTOMATIC1111/stable-diffusion-webui/disc...
[5] https://forums.macrumors.com/threads/ai-generated-art-stable...
- Microsoft Research proposes new framework, LongMem, allowing for unlimited context length along with reduced GPU memory usage and faster inference speed. Code will be open-sourced
And https://github.com/microsoft/deepspeed
- April 2023
DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales (https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-chat)
phasellm
- Ask HN: Any recommended AI tools to analyze data and generate insights?
If you're looking for an open source solution you can customize, check out the ResearchLLM demo: https://phasellm.com/researchllm
Code: https://github.com/wgryc/phasellm/tree/main/demos-and-produc...
- PhaseLLM Eval: run batch LLM jobs and evals via visual front-end (MIT licensed)
- To everyone who is using alternative bots (e.g. Claude) - your comparisons?
Using Claude, Cohere, GPT-4, and OpenAssistant, and swapping between them using PhaseLLM (an open-source library similar to LangChain).
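For anyone wondering what that swapping looks like in code, here is a minimal sketch based on the wrapper classes shown in the phasellm README. The API keys are placeholders, and the wrapper names should be verified against the current library.

```python
from phasellm.llms import OpenAIGPTWrapper, ClaudeWrapper, ChatBot

# Placeholder keys; substitute your own.
gpt4 = OpenAIGPTWrapper("sk-...", model="gpt-4")
claude = ClaudeWrapper("anthropic-key-...")

# Each provider sits behind the same interface, so the surrounding
# workflow code stays identical when you swap backends.
for llm in (gpt4, claude):
    chatbot = ChatBot(llm)
    print(chatbot.chat("In one sentence, when should I reach for RLHF?"))
```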
- April 2023
Large language model evaluation and workflow framework from Phase AI. (https://github.com/wgryc/phasellm)
- Ask HN: Freelancer? Seeking freelancer? (June 2023)
- ResearchGPT: Automated Data Analysis and Interpretation
Fantastic questions! Re: working/not working at times -- this is still an issue. It's why I'm building PhaseLLM more broadly (https://github.com/wgryc/phasellm) -- need a robust pipeline that can also "reset" parts of itself if an LLM makes errors or mistakes.
You can see my prompts in this file: https://github.com/wgryc/phasellm/blob/main/demos-and-produc... I autogenerate a fairly big starting prompt and keep resubmitting it. It describes the data set extensively, which helps quite a bit.
That being said, a lot more can be done here around prompt optimization + making this more robust.
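To illustrate the autogenerated starting-prompt idea (this is not the author's actual code; the column-summary format here is an assumption), a data-describing prompt could be built along these lines:

```python
import pandas as pd

def build_starting_prompt(df: pd.DataFrame, task: str) -> str:
    """Autogenerate a prompt that describes the data set extensively,
    giving the LLM enough context to write correct analysis code."""
    lines = [
        f"You are a data analyst. Task: {task}",
        f"The data set has {len(df)} rows and these columns:",
    ]
    for col in df.columns:
        desc = f"- {col} ({df[col].dtype})"
        if pd.api.types.is_numeric_dtype(df[col]):
            desc += f", min={df[col].min()}, max={df[col].max()}"
        else:
            desc += f", sample values: {list(df[col].dropna().unique()[:5])}"
        lines.append(desc)
    return "\n".join(lines)

df = pd.DataFrame({"age": [34, 29, 51], "city": ["Oslo", "Lima", "Pune"]})
print(build_starting_prompt(df, "find correlates of age"))
```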
- ResearchGPT: LLMs to write stats code, analyze, and interpret results for you
- Best way to use GPT offline with own content?
That being said, you might want to actually run head-to-head tests between models. PhaseLLM (free, open source) allows you to build a workflow and plug and play various models (including Dolly 2.0 and GPT-4). Then you can run tests to see how much worse/better the various LLMs are and if that's acceptable for your use case.
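A head-to-head test along those lines might look like the sketch below. `DollyWrapper` is assumed from phasellm's Dolly 2.0 announcement and the key is a placeholder, so check the names against the library before relying on them.

```python
from phasellm.llms import OpenAIGPTWrapper, DollyWrapper, ChatBot

# A tiny, hypothetical test set; real evals would use your own content.
test_prompts = [
    "What is the capital of France?",
    "Summarize the plot of Hamlet in two sentences.",
]

models = {
    "gpt-4": OpenAIGPTWrapper("sk-...", model="gpt-4"),  # placeholder key
    "dolly-2.0": DollyWrapper(),  # runs locally via Hugging Face
}

# Collect side-by-side answers so you can judge whether the cheaper or
# offline model is acceptable for your use case.
for prompt in test_prompts:
    print(f"\n=== {prompt}")
    for name, llm in models.items():
        chatbot = ChatBot(llm)  # fresh conversation per model and prompt
        print(f"[{name}] {chatbot.chat(prompt)}")
```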
- 12-Apr-2023 AI Summary
Large language model evaluation and workflow framework from Phase AI. (https://github.com/wgryc/phasellm)
- PhaseLLM: Standardized Chat LLM API (Cohere, Claude, GPT) + Evaluation Framework
What are some alternatives?
ColossalAI - Making large AI models cheaper, faster and more accessible
awesome-chatgpt - 🧠 A curated list of awesome ChatGPT resources, including libraries, SDKs, APIs, and more. 🌟 Please consider supporting this project by giving it a star.
Megatron-LM - Ongoing research training transformer models at scale
telegram-chatgpt-concierge-bot - Interact with OpenAI's ChatGPT via Telegram and Voice.
fairscale - PyTorch extensions for high performance and large scale training.
rel-events - The relevant React Events Library.
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
kivy - Open source UI framework written in Python, running on Windows, Linux, macOS, Android and iOS
accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
prompt-engineering - ChatGPT Prompt Engineering for Developers - deeplearning.ai
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Flowise - Drag & drop UI to build your customized LLM flow