Bitsandbytes Alternatives
Similar projects and alternatives to bitsandbytes
- text-generation-webui: A Gradio web UI for Large Language Models. Supports transformers, GPTQ, llama.cpp (GGUF), and Llama models.
- FastChat: An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
- accelerate: 🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, and mixed precision.
- DeepFaceLab: The leading software for creating deepfakes.
- Dreambooth-Stable-Diffusion-cpu: Implementation of DreamBooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion.
- diffusers: 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch.
- Cold-Diffusion-Models: Official implementation of Cold Diffusion for different transformations, in PyTorch.
- PeRFception: [NeurIPS 2022] Official implementation of PeRFception: Perception using Radiance Fields.
- Intrusion-Detection-System-Using-Machine-Learning: Code for IDS-ML: intrusion detection system development using machine learning algorithms (decision tree, random forest, extra trees, XGBoost, stacking, k-means, Bayesian optimization, ...).
- intel-extension-for-pytorch: A Python package that extends official PyTorch to easily get extra performance on Intel platforms.
- Awesome-Dataset-Distillation: Awesome Dataset Distillation Papers.
bitsandbytes reviews and mentions
- A comprehensive guide to running Llama 2 locally
While on the surface, a 192GB Mac Studio seems like a great deal (it's not much more than a 48GB A6000!), there are several reasons why this might not be a good idea:
* I assume most people have never used llama.cpp Metal w/ large models. It will drop to CPU speeds whenever the context window is full: https://github.com/ggerganov/llama.cpp/issues/1730#issuecomm... - while sure this might be fixed in the future, it's been an issue since Metal support was added, and is a significant problem if you are actually trying to use it for inferencing. With 192GB of memory, you could probably run larger models w/o quantization, but I've never seen anyone post benchmarks of their experiences. Note that at that point, the limited memory bandwidth will be a big factor.
* If you are planning on using Apple Silicon for ML/training, I'd also be wary. There are multi-year open bugs in PyTorch[1], and most major LLM libs like DeepSpeed, bitsandbytes, etc. don't have Apple Silicon support[2][3].
You can see similar patterns w/ Stable Diffusion support [4][5] - support lagging by months, lots of problems and poor performance with inference, much less fine tuning. You can apply this to basically any ML application you want (srt, tts, video, etc)
Macs are fine to poke around with, but if you actually plan to do more than run a small LLM and say "neat", especially for a business, recommending a Mac for anyone getting started w/ ML workloads is a bad take. (In general, for anyone getting started, unless you're just burning budget, renting cloud GPU is going to be the best cost/perf, although on-prem/local obviously has other advantages.)
[1] https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3A...
[2] https://github.com/microsoft/DeepSpeed/issues/1580
[3] https://github.com/TimDettmers/bitsandbytes/issues/485
[4] https://github.com/AUTOMATIC1111/stable-diffusion-webui/disc...
[5] https://forums.macrumors.com/threads/ai-generated-art-stable...
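A minimal sketch of the kind of capability check behind this comment, assuming a recent PyTorch build; it probes which accelerator backends are actually usable, and shows why bitsandbytes (which targets CUDA) is a sticking point on a Mac:

```python
# Sketch: probe which accelerator backends this machine actually supports.
import torch

# MPS (Metal) is PyTorch's Apple Silicon backend; op coverage still lags CUDA.
print("CUDA available:", torch.cuda.is_available())
print("MPS available:", torch.backends.mps.is_available())

# bitsandbytes targets CUDA, so on Apple Silicon the import typically
# warns or raises rather than giving you a working 8-bit/4-bit backend.
try:
    import bitsandbytes as bnb  # noqa: F401
    print("bitsandbytes imported OK")
except Exception as e:
    print("bitsandbytes unavailable:", e)
```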
- Considering getting a Jetson AGX Orin... anyone have experience with it?
Do you by chance have any details on how to run oobabooga on the Orin? I keep running into this issue, seemingly related to bitsandbytes.
- Finetuning on multiple GPUs
If it also has QLoRA that would be best, but AFAIK it's not implemented in bitsandbytes yet?
A new paper has been released, QLoRA, which is nothing short of game-changing for the ability to train and fine-tune LLMs on consumer GPUs.
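For reference, a hedged sketch of what QLoRA-style 4-bit loading looks like through the Hugging Face stack once the bitsandbytes/transformers changes landed; the model id here is a placeholder:

```python
# Sketch: loading a causal LM with 4-bit (QLoRA-style) quantization via
# transformers + bitsandbytes. The model id is a placeholder.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4, the QLoRA data type
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # do matmuls in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",                  # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)
```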
- Anybody tried Lion: Adversarial Distillation of Closed-Source Large Language Model?
After looking in the bitsandbytes GitHub, I wanted to understand what "Added PagedLion and bf16 Lion" means :)
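For what that changelog entry covers, a minimal sketch, assuming a bitsandbytes release that ships bnb.optim.PagedLion; the model and hyperparameters are placeholders:

```python
# Sketch: dropping a paged Lion optimizer into a toy training step.
# Assumes a bitsandbytes release that includes PagedLion; requires CUDA.
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()  # stand-in for a real model

# Paged optimizers page optimizer state between GPU and CPU memory to
# avoid OOM spikes; Lion itself keeps less state than Adam to begin with.
optimizer = bnb.optim.PagedLion(model.parameters(), lr=1e-4)

x = torch.randn(8, 1024, device="cuda")
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```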
- QLoRA: Efficient Finetuning of Quantized LLMs
Tim Dettmers is such a star. He's probably done more to make low-resource LLMs usable than anyone else.
First bitsandbytes[1] and now this.
- GitHub - artidoro/qlora: QLoRA: Efficient Finetuning of Quantized LLMs
It's in the current main branch on Git (https://github.com/TimDettmers/bitsandbytes/blob/main/CHANGELOG.md); same for transformers (https://github.com/huggingface/transformers/pull/23479).
- [D] About the current state of ROCm
ROCM Support · Issue #47 · TimDettmers/bitsandbytes
- My plan to run 30B models
Stats
TimDettmers/bitsandbytes is an open-source project licensed under the MIT License, which is an OSI-approved license.
The primary programming language of bitsandbytes is Python.