fickling vs DeepSpeed
| | fickling | DeepSpeed |
|---|---|---|
| Mentions | 7 | 51 |
| Stars | 327 | 32,550 |
| Growth | 22.3% | 3.2% |
| Activity | 8.4 | 9.8 |
| Latest Commit | 3 days ago | 7 days ago |
| Language | Python | Python |
| License | GNU Lesser General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
fickling
- Fickling – A Python pickling decompiler and static analyzer
- ⚠️WARNING⚠️ never open a .ckpt file without knowing exactly what's inside (especially SDXL)
- Facebook LLAMA is being openly distributed via torrents
You're right! You should probably use Trail of Bits' Fickling tool to investigate: https://github.com/trailofbits/fickling
- Safety of downloading random checkpoints
I tested the Anything V3 pruned checkpoint from Hugging Face, and indeed there is nothing funny in its pickle; I used the Fickling library to decompile it. I do not use Windows, so my interest in .ckpt security is largely limited to pickle exploits, which could extract malicious code from a data file and then do something with it; the data files themselves are not executed. I will edit this comment with the lines referencing that data file.
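For context on why such pickle exploits are possible at all: pickle is a small virtual machine that may invoke arbitrary callables during deserialization. A minimal sketch of the classic `__reduce__` trick (the command string here is a harmless placeholder):

```python
import os
import pickle


class Evil:
    # pickle calls __reduce__ to learn how to rebuild the object;
    # returning (callable, args) makes pickle.loads() invoke os.system()
    def __reduce__(self):
        return (os.system, ("echo pwned",))


payload = pickle.dumps(Evil())
pickle.loads(payload)  # runs `echo pwned` at load time
```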
- Draw Things, Stable Diffusion in your pocket, 100% offline and free
I've been using Diffusion Bee on my Mac, and it has just gained the ability to import models (which it converts), but it uses unpickling to do so, if only barely: it unpickles, figures out what sort of data is in each data file, and then computes what it wants from them on its own. I would love for it not to use unpickling at all, so my intention, if I can figure it out, is to write a script that decodes the pickle file (with Fickling or otherwise) and then just does the weight calculation/assignment, roughly along the lines of the sketch below.
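One minimal version of that idea is the restricted-Unpickler pattern from the Python documentation; the allowlist here is a hypothetical illustration, and a real checkpoint loader would need whichever tensor-reconstruction globals the checkpoint actually references:

```python
import io
import pickle

# Hypothetical allowlist: the only globals the pickle stream may reference.
SAFE_GLOBALS = {
    ("collections", "OrderedDict"),
}


class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Called for every GLOBAL opcode; anything off-list is rejected
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")


def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```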
- NovelAI models allegedly leaked.
- Never a dill moment: Exploiting machine learning pickle files
Something you won't gather from skim-reading the headline is that the author has also created a tool, Fickling (https://github.com/trailofbits/fickling), to aid in playing around with pickle files.
From the article: [Fickling] can help you reverse engineer, test, and even create malicious pickle files.
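As a quick illustration, here is a sketch based on the usage shown in Fickling's README (API details may have shifted between releases): it decompiles a pickle stream into a Python AST without ever executing it.

```python
import ast
import pickle

from fickling.pickle import Pickled

# Decompile a (here: benign) pickle stream into an AST for inspection
fickled = Pickled.load(pickle.dumps([1, 2, 3, 4]))
print(ast.dump(fickled.ast, indent=4))
```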
DeepSpeed
- Can we discuss MLOps, Deployment, Optimizations, and Speed?
DeepSpeed can handle parallelism concerns, and can even offload data/model state to RAM, or even NVMe (!?). I'm surprised I don't see this project used more; a sketch of such a config follows.
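A minimal sketch of what that offloading looks like in a DeepSpeed config (ZeRO stage 3 with optimizer state offloaded to CPU and parameters to NVMe, following the documented config schema; the path and batch size are placeholder assumptions):

```python
# A dict like this is passed to deepspeed.initialize(..., config=ds_config)
ds_config = {
    "train_batch_size": 8,
    "zero_optimization": {
        "stage": 3,  # partition params, gradients, and optimizer state
        "offload_optimizer": {"device": "cpu"},
        "offload_param": {
            "device": "nvme",
            "nvme_path": "/local_nvme",  # placeholder path, an assumption
        },
    },
}
```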
- [P][D] A100 is much slower than expected at low batch size for text generation
- DeepSpeed-FastGen: High-Throughput for LLMs via MII and DeepSpeed-Inference
- DeepSpeed-FastGen: High-Throughput Text Generation for LLMs
- Why async gradient update doesn't get popular in LLM community?
- DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models (r/MachineLearning)
- [P] DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models
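Several of the mentions above concern DeepSpeed-FastGen, which is driven through the MII library; a minimal sketch of the serving API as shown in its announcement (the model id is only an example, and the API may have changed since):

```python
import mii

# Stand up an in-process text-generation pipeline on DeepSpeed kernels
pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")  # example model id
responses = pipe(["DeepSpeed is"], max_new_tokens=64)
print(responses)
```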
- A comprehensive guide to running Llama 2 locally
While on the surface, a 192GB Mac Studio seems like a great deal (it's not much more than a 48GB A6000!), there are several reasons why this might not be a good idea:
* I assume most people have never used llama.cpp Metal w/ large models. It will drop to CPU speeds whenever the context window is full: https://github.com/ggerganov/llama.cpp/issues/1730#issuecomm... - sure, this might be fixed in the future, but it's been an issue since Metal support was added, and it's a significant problem if you are actually trying to use it for inferencing. With 192GB of memory, you could probably run larger models w/o quantization, but I've never seen anyone post benchmarks of their experience. Note that at that point, the limited memory bandwidth will be a big factor.
* If you are planning on using Apple Silicon for ML/training, I'd also be wary. There are multi-year-old open bugs in PyTorch[1], and most major LLM libs like deepspeed, bitsandbytes, etc. don't have Apple Silicon support[2][3] (a quick way to probe the PyTorch side is sketched after the footnotes).
You can see similar patterns w/ Stable Diffusion support[4][5]: support lagging by months, lots of problems and poor performance with inference, much less fine-tuning. You can apply this to basically any ML application you want (STT, TTS, video, etc.).
Macs are fine to poke around with, but if you actually plan to do more than run a small LLM and say "neat", especially for a business, recommending a Mac for anyone getting started w/ ML workloads is a bad take. (In general, for anyone getting started, unless you're just burning budget, renting cloud GPU is going to be the best cost/perf, although on-prem/local obviously has other advantages.)
[1] https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3A...
[2] https://github.com/microsoft/DeepSpeed/issues/1580
[3] https://github.com/TimDettmers/bitsandbytes/issues/485
[4] https://github.com/AUTOMATIC1111/stable-diffusion-webui/disc...
[5] https://forums.macrumors.com/threads/ai-generated-art-stable...
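For anyone evaluating the Apple Silicon point above, a quick probe of PyTorch's MPS backend; note this only checks availability, and says nothing about the op-coverage or performance issues cited in [1]:

```python
import torch

# True only if this PyTorch build includes MPS and macOS exposes the device
print("MPS built:", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
x = torch.randn(4, 4, device=device)  # silently falls back to CPU elsewhere
print(x.device)
```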
- Microsoft Research proposes new framework, LongMem, allowing for unlimited context length along with reduced GPU memory usage and faster inference speed. Code will be open-sourced.
And https://github.com/microsoft/deepspeed
- DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales (https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-chat) (April 2023)
What are some alternatives?
swift-diffusion
ColossalAI - Making large AI models cheaper, faster and more accessible
diffusionbee-stable-diffusion-ui - Diffusion Bee
Megatron-LM - Ongoing research training transformer models at scale
safer_unpickle
fairscale - PyTorch extensions for high performance and large scale training.
sd-webui-model-converter - Model conversion extension for stable-diffusion-webui; supports converting fp16/bf16, no-EMA/EMA-only, and safetensors
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
llama - Inference code for Llama models