bitsandbytes vs fast-stable-diffusion

| | bitsandbytes | fast-stable-diffusion |
|---|---|---|
| Mentions | 61 | 239 |
| Stars | 5,447 | 7,316 |
| Growth | - | - |
| Activity | 9.4 | 8.6 |
| Latest Commit | 5 days ago | 20 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bitsandbytes
-
French AI startup Mistral secures €2B valuation
No. Without the inference code, the best we can do is guess at its implementation, so the benchmark figures we can get could be quite wrong. It does seem better than Llama2-70B in my tests, which rely on the work done by Dmytro Dzhulgakov[0] and DiscoResearch[1].
But the point of releasing on BitTorrent is to see the effervescence in hobbyist research and the early attempts at MoE quantization, which are already ongoing[2] and benefitting from the community.
[0]: https://github.com/dzhulgakov/llama-mistral
[1]: https://huggingface.co/DiscoResearch/mixtral-7b-8expert
[2]: https://github.com/TimDettmers/bitsandbytes/tree/sparse_moe
-
Lora training with Kohya issue
CUDA SETUP: To manually override the PyTorch CUDA version, please see: https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
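That document describes overriding the detected CUDA version through environment variables. A minimal sketch of the idea in Python, assuming you have installed a CUDA 12.2 toolkit yourself; the version number and library path are placeholders, and the canonical approach is to export these in the shell before launching Python:

    import os

    # Must be set before the first `import bitsandbytes`, since the library
    # reads these variables during its import-time CUDA setup.
    os.environ["BNB_CUDA_VERSION"] = "122"                        # placeholder version
    os.environ["LD_LIBRARY_PATH"] = "/usr/local/cuda-12.2/lib64"  # placeholder path

    import bitsandbytes as bnb  # should now load libbitsandbytes_cuda122.so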
- FLaNK Stack Weekly for 30 Oct 2023
-
A comprehensive guide to running Llama 2 locally
While on the surface, a 192GB Mac Studio seems like a great deal (it's not much more than a 48GB A6000!), there are several reasons why this might not be a good idea:
* I assume most people have never used llama.cpp Metal w/ large models. It will drop to CPU speeds whenever the context window is full: https://github.com/ggerganov/llama.cpp/issues/1730#issuecomm... - while this might be fixed in the future, it's been an issue since Metal support was added, and it's a significant problem if you're actually trying to use it for inference. With 192GB of memory, you could probably run larger models w/o quantization, but I've never seen anyone post benchmarks of their experience. Note that at that point, the limited memory bandwidth will be a big factor.
* If you are planning on using Apple Silicon for ML/training, I'd also be wary. There are multi-year-old open bugs in PyTorch[1], and most major LLM libs like deepspeed, bitsandbytes, etc. don't have Apple Silicon support[2][3]; a quick device-availability check is sketched after the links below.
You can see similar patterns w/ Stable Diffusion support[4][5] - support lagging by months, lots of problems and poor performance with inference, much less fine-tuning. You can apply this to basically any ML application you want (stt, tts, video, etc.).
Macs are fine to poke around with, but if you actually plan to do more than run a small LLM and say "neat", especially for a business, recommending a Mac for anyone getting started w/ ML workloads is a bad take. (In general, for anyone getting started, unless you're just burning budget, renting cloud GPU is going to be the best cost/perf, although on-prem/local obviously has other advantages.)
[1] https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3A...
[2] https://github.com/microsoft/DeepSpeed/issues/1580
[3] https://github.com/TimDettmers/bitsandbytes/issues/485
[4] https://github.com/AUTOMATIC1111/stable-diffusion-webui/disc...
[5] https://forums.macrumors.com/threads/ai-generated-art-stable...
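Since PyTorch's Apple Silicon backend (MPS) is what these libraries would need, a quick capability check is worth running before committing to a Mac workflow. A minimal sketch; the CUDA-then-MPS-then-CPU fallback order is just one reasonable choice:

    import torch

    # MPS being available does not mean every op is implemented; unsupported
    # ops either raise or (with PYTORCH_ENABLE_MPS_FALLBACK=1) run on CPU.
    if torch.cuda.is_available():
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    print(f"Using device: {device}")
    x = torch.randn(4, 4, device=device)  # smoke test on the chosen device
    print(x.sum().item())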
-
4-bit inference 4.2x faster than 16-bit
Release notes: https://github.com/TimDettmers/bitsandbytes/releases/tag/0.4...
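For context, bitsandbytes 4-bit inference is usually reached through the transformers integration. A minimal sketch; the model id is a placeholder and the options shown are common choices, not the release's benchmark setup:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM on the Hub

    # NF4 quantization with fp16 compute is a typical starting point.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=bnb_config, device_map="auto"
    )

    inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))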
-
Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0']
Welcome to bitsandbytes. For bug reports, please run python -m bitsandbytes and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cpu.so
/usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
ERROR: /usr/bin/python3: undefined symbol: cudaRuntimeGetVersion
CUDA SETUP: libcudart.so path is None
CUDA SETUP: Is seems that your cuda installation is not in your path. See https://github.com/TimDettmers/bitsandbytes/issues/85 for more information.
CUDA SETUP: CUDA version lower than 11 are currently not supported for LLM.int8(). You will be only to use 8-bit optimizers and quantization routines!!
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 00
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cpu.so...
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /usr/lib64-nvidia did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('http'), PosixPath('//172.28.0.1'), PosixPath('8013')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-t4-s-1b6gsytv7z9le --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true'), PosixPath('--logtostderr --listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('module'), PosixPath('//ipykernel.pylab.backend_inline')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)!
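A quick way to see where the duplicate runtimes come from is to list every libcudart on disk before touching any paths. A small diagnostic sketch; the search roots are common Linux/Colab locations and an assumption to adapt:

    import glob

    # Collect every libcudart.so* that the loader or bitsandbytes' old
    # cuda_setup logic might find, to explain the "Found duplicate" warning.
    candidates = []
    for root in ("/usr/local", "/usr/lib", "/usr/lib64-nvidia", "/opt"):
        candidates += glob.glob(f"{root}/**/libcudart.so*", recursive=True)
    for path in sorted(set(candidates)):
        print(path)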
-
Having trouble using the multimodal tools.
RuntimeError: CUDA Setup failed despite GPU being available. Inspect the CUDA SETUP outputs above to fix your environment! If you cannot find any issues and suspect a bug, please open an issue with details about your environment: https://github.com/TimDettmers/bitsandbytes/issues
- [TextGen WebUI] Service terminated error? (Screenshots in post)
- Considering getting a Jetson AGX Orin... anyone have experience with it?
-
How to disable the `bitsandbytes` intro message:
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run python -m bitsandbytes and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda121.so
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.9
CUDA SETUP: Detected CUDA version 121
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda121.so...
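Two ways to silence this banner are commonly suggested. Note that the environment-variable switch is an assumption that only holds on bitsandbytes versions which support it (it appeared around 0.39), so the stdout redirect is the portable fallback. A minimal sketch:

    import os

    # Option 1 (assumption: your bitsandbytes version honors this variable).
    # Must be set before the first import.
    os.environ["BITSANDBYTES_NOWELCOME"] = "1"

    # Option 2: portable fallback - swallow anything printed during import.
    import contextlib, io
    with contextlib.redirect_stdout(io.StringIO()):
        import bitsandbytes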
fast-stable-diffusion
-
Working Colab notebooks for training Dreambooth?
I tried using TheLastBen's fast dreambooth trainer. I managed to train a ckpt file but I can't run it.
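One way to sanity-check such a checkpoint outside the Colab is to load it with diffusers. A minimal sketch, assuming a Stable Diffusion 1.x-style .ckpt and a recent diffusers release; the file path and prompt are placeholders:

    import torch
    from diffusers import StableDiffusionPipeline

    # Placeholder path to the checkpoint the Dreambooth trainer produced.
    pipe = StableDiffusionPipeline.from_single_file(
        "/content/my_dreambooth_model.ckpt", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")

    image = pipe("a photo of sks person on a beach").images[0]
    image.save("test.png")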
-
Running AUTOMATIC1111 on Google Colab
There's a colab from TheLastBen. It used to be the best back when Auto1111 was working on Google Colab's free tier. https://github.com/TheLastBen/fast-stable-diffusion
- Stability AI releases its latest image-generating model, Stable Diffusion XL 1.0
-
Google Colab disconnects after 5 mins of hosting A1111
Using https://github.com/TheLastBen/fast-stable-diffusion
-
I'm kinda new to all of this and just wanted to ask... How can I fix something like this? Tried inpaint but it didn't work even after changing parameters, and img2img makes it lose quality...
This repo offers a template for how to start with SD on RunPod: https://github.com/TheLastBen/fast-stable-diffusion. But I know how to code, so I made my own solution.
-
Unable to use ControlNet on AUTO1111 GUI - Google Colab Notebook
I can confirm I'm using the latest version of the colab notebook from this repo (https://github.com/TheLastBen/fast-stable-diffusion). Can anyone point me to a solution to this problem? Thanks in advance!
- Automatic 1111 not working
-
Useful Links
TheLastBen's Fast DB SD Colabs, +25-50% speed increase, AUTOMATIC1111 + DreamBooth
-
Can you use another base model to train your own model with TheLastBen's or ShivamShrirao's colab?
CalledProcessError                        Traceback (most recent call last)
in ()
    182 wget.download('https://github.com/TheLastBen/fast-stable-diffusion/raw/main/Dreambooth/det.py')
    183 print('Detecting model version...')
--> 184 Custom_Model_Version=check_output('python det.py '+sftnsr+' --MODEL_PATH '+MODEL_PATH, shell=True).decode('utf-8').replace('\n', '')
    185 clear_output()
    186 print(''+Custom_Model_Version+' Detected')
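check_output only surfaces the exit code, which makes the failure above opaque. A hedged debugging sketch that mirrors the notebook's call but captures stderr so det.py's actual error becomes visible; sftnsr and MODEL_PATH below are stand-ins for the notebook's variables:

    import subprocess

    # Stand-ins for the notebook's variables.
    sftnsr = ""
    MODEL_PATH = "/content/model.ckpt"

    result = subprocess.run(
        'python det.py ' + sftnsr + ' --MODEL_PATH ' + MODEL_PATH,
        shell=True, capture_output=True, text=True,
    )
    print("exit code:", result.returncode)
    print("stdout:", result.stdout)
    print("stderr:", result.stderr)  # the real reason det.py failed shows up here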
-
How to Install and Run Stable Diffusion in Automatic1111 with Deforum in Google Colab?
have you tried https://github.com/TheLastBen/fast-stable-diffusion ?
What are some alternatives?
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
DeepFaceLab - DeepFaceLab is the leading software for creating deepfakes.
accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
InvokeAI - InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
stable-diffusion-tensorflow - Stable Diffusion in TensorFlow / Keras
Dreambooth-Stable-Diffusion-cpu - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
efficient-dreambooth - [Moved to: https://github.com/smy20011/dreambooth-docker]
llama.cpp - LLM inference in C/C++
stable-diffusion-webui-docker - Easy Docker setup for Stable Diffusion with user-friendly UI
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
stable-diffusion - A latent text-to-image diffusion model