|  | Selefra | DeepSpeed |
|---|---|---|
| Mentions | 36 | 51 |
| Stars | 508 | 32,739 |
| Growth | 0.8% | 1.6% |
| Activity | 7.6 | 9.8 |
| Latest commit | 8 months ago | 6 days ago |
| Language | Go | Python |
| License | MPL-2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
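The exact formula behind the activity number isn't published here; purely as a hypothetical illustration of "recent commits have higher weight", one could imagine an exponential decay over commit age. The half-life, cap, and function below are all invented for this sketch:

```python
import math

def activity_score(commit_ages_weeks, half_life_weeks=12, scale=10.0):
    """Hypothetical recency-weighted score: sum exponentially-decayed
    weights over commit ages, capped at `scale`. Not the site's formula."""
    decay = math.log(2) / half_life_weeks
    raw = sum(math.exp(-decay * age) for age in commit_ages_weeks)
    return min(scale, raw)

print(activity_score([0, 1, 1, 2, 3, 5, 8]))  # mostly fresh commits -> higher score
print(activity_score([40, 52, 60]))           # stale commits -> score near zero
```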
Selefra
- A Better Version Is Released - Selefra v0.2.3
  [Feature] Modules support filtering, and labels support customization in any format. by @FelixsJiang in #30
- How to spot and troubleshoot AWS S3 bucket object traversal issues
  Selefra Project Repository: github.com/selefra/selefra
- Analyze cloud resources using GPT (https://github.com/selefra/selefra), April 2023
- Policy-as-code is recommended for managing cloud and SaaS services
  Selefra is an open-source Policy-as-Code tool that uses natural language to write rules for security compliance checks, cost configuration checks, and architecture soundness checks on cloud services.
- Using GPT to Analyze Cloud Security Issues for GCP
  GitHub: https://github.com/selefra/selefra
  Website: https://www.selefra.io/
- Using GPT to Analyze Cloud Security Issues
  We strongly encourage you to try Selefra and enjoy a faster, more efficient cloud security analysis and resolution experience. You can find more information about Selefra on our official website (https://www.selefra.io/) or GitHub (https://github.com/selefra/selefra), or follow our Twitter account (https://twitter.com/SelefraCorp) for real-time updates.
- Made a ChatGPT-powered, open-source AI cloud-insight tool
- Using Selefra GPT to check whether GCP has architecture design defects
  Check it out and star it on GitHub: https://github.com/selefra/selefra
DeepSpeed
- Can we discuss MLOps, Deployment, Optimizations, and Speed?
  DeepSpeed can handle parallelism concerns, and can even offload data and model state to RAM, or even NVMe(!). I'm surprised I don't see this project used more. (A minimal sketch of this offload configuration follows at the end of this list.)
- [P][D] A100 is much slower than expected at low batch size for text generation
- DeepSpeed-FastGen: High-Throughput for LLMs via MII and DeepSpeed-Inference
- DeepSpeed-FastGen: High-Throughput Text Generation for LLMs
- Why async gradient update doesn't get popular in LLM community?
- DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models (r/MachineLearning)
- [P] DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models
- A comprehensive guide to running Llama 2 locally
  On the surface, a 192GB Mac Studio seems like a great deal (it's not much more than a 48GB A6000!), but there are several reasons why this might not be a good idea:
  * I assume most people have never used llama.cpp Metal w/ large models. It drops to CPU speeds whenever the context window is full: https://github.com/ggerganov/llama.cpp/issues/1730#issuecomm... - while this might be fixed in the future, it's been an issue since Metal support was added, and it's a significant problem if you're actually trying to use it for inference. With 192GB of memory, you could probably run larger models w/o quantization, but I've never seen anyone post benchmarks of their experiences. Note that at that point, the limited memory bandwidth will be a big factor.
  * If you're planning on using Apple Silicon for ML/training, I'd also be wary. There are multi-year-old open bugs in PyTorch[1], and most major LLM libs like DeepSpeed and bitsandbytes don't have Apple Silicon support[2][3]. (A quick backend-availability probe appears after this list.)
  You can see similar patterns w/ Stable Diffusion support[4][5] - support lagging by months, lots of problems, and poor performance with inference, much less fine-tuning. You can apply this to basically any ML application you want (srt, tts, video, etc.).
  Macs are fine to poke around with, but if you actually plan to do more than run a small LLM and say "neat", especially for a business, recommending a Mac to anyone getting started w/ ML workloads is a bad take. (In general, for anyone getting started, unless you're just burning budget, renting cloud GPUs is going to be the best cost/perf, although on-prem/local obviously has other advantages.)
  [1] https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3A...
  [2] https://github.com/microsoft/DeepSpeed/issues/1580
  [3] https://github.com/TimDettmers/bitsandbytes/issues/485
  [4] https://github.com/AUTOMATIC1111/stable-diffusion-webui/disc...
  [5] https://forums.macrumors.com/threads/ai-generated-art-stable...
- Microsoft Research proposes a new framework, LongMem, allowing unlimited context length along with reduced GPU memory usage and faster inference. Code will be open-sourced.
  And https://github.com/microsoft/deepspeed
- DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales (https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-chat), April 2023
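As referenced in the first item above, DeepSpeed's ZeRO optimizer can offload optimizer state and parameters out of GPU memory. A minimal single-GPU sketch, assuming DeepSpeed is installed and an NVMe drive is mounted at the hypothetical path /local_nvme; the model, batch size, and learning rate are placeholders:

```python
import torch
import deepspeed

# Placeholder model; any torch.nn.Module works here.
model = torch.nn.Linear(4096, 4096)

ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {
        "stage": 3,  # partition parameters, gradients, and optimizer state
        "offload_optimizer": {"device": "cpu"},  # optimizer state -> host RAM
        "offload_param": {
            "device": "nvme",           # parameters -> NVMe
            "nvme_path": "/local_nvme"  # hypothetical mount point
        },
    },
}

# deepspeed.initialize wraps the model in an engine that handles the
# partitioning and offload transparently during engine.backward()/step().
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```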
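On the Apple Silicon point in the Llama 2 comment: a quick way to see which PyTorch backends a given machine actually exposes (the MPS checks require PyTorch 1.12 or later; CUDA-only libraries such as DeepSpeed and bitsandbytes won't engage when only MPS is present):

```python
import torch

# Report which accelerator backends this PyTorch build can see.
print("CUDA available:", torch.cuda.is_available())
print("MPS built:", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())

# Pick the best available device, falling back to CPU.
device = (
    "cuda" if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available()
    else "cpu"
)
print("Selected device:", device)
```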
What are some alternatives?
ZeusCloud - Open Source Cloud Security
ColossalAI - Making large AI models cheaper, faster and more accessible
E2B - Secure cloud runtime for AI apps & AI agents. Fully open-source.
Megatron-LM - Ongoing research training transformer models at scale
Flowise - Drag & drop UI to build your customized LLM flow
fairscale - PyTorch extensions for high performance and large scale training.
salami - Infrastructure as Natural Language
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
textSQL
accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
deepdoctection - A Repo For Document AI
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.