Stable-Diffusion vs ColossalAI

| | Stable-Diffusion | ColossalAI |
|---|---|---|
| Mentions | 30 | 42 |
| Stars | 1,760 | 37,951 |
| Growth | - | 1.3% |
| Activity | 9.8 | 9.7 |
| Latest commit | 6 days ago | 7 days ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
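The exact activity formula is not published, so the snippet below is only a hedged sketch of the idea described above: weight each commit by its recency so newer commits contribute more to the score than older ones. The function name, half-life constant, and example dates are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone
from math import exp

def activity_score(commit_dates, half_life_days=30.0):
    """Toy recency-weighted activity: each commit contributes exp(-ln(2) * age / half_life),
    so recent commits count more than older ones. The site's real formula is not published;
    the 30-day half-life is an assumption for illustration only."""
    now = datetime.now(timezone.utc)
    return sum(
        exp(-0.693 * (now - d).total_seconds() / 86400 / half_life_days)
        for d in commit_dates
    )

# Example: three commits from the last few days outweigh ten commits from a year ago.
now = datetime.now(timezone.utc)
fresh = [now - timedelta(days=d) for d in (1, 2, 3)]
stale = [now - timedelta(days=365)] * 10
print(activity_score(fresh), activity_score(stale))
```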
Stable-Diffusion
- Scalable Load Balancing Having Cloud GPU Service Salad Tutorial With Whisper Transcriber Gradio APP
- FLaNK AI-April 22, 2024
- OneTrainer Fine Tuning vs Kohya SS DreamBooth & Huge Research of OneTrainer’s Masked Training
  Stay subscribed and turn on notifications so you don't miss it: https://www.youtube.com/SECourses
- Finding Best Training Hyper Parameters / Configuration Is Neither Cheap Nor Easy
  You can use an A6000 GPU on MassedCompute with our template for only 31 cents per hour. Follow the instructions here (still a work in progress): https://github.com/FurkanGozukara/Stable-Diffusion/blob/main/Tutorials/OneTrainer-Master-SD-1_5-SDXL-Windows-Cloud-Tutorial.md
- Compared Effect Of Image Captioning For SDXL Fine-tuning / DreamBooth Training for a Single Person, 10.3 GB VRAM via OneTrainer
  The tutorial will be on our channel: https://www.youtube.com/SECourses
- A New Gold Tutorial For RunPod & Linux Users: How To Use Storage Network Volume In RunPod & Latest Version Of Automatic1111
  Patreon exclusive posts index
- SUPIR Full Tutorial + 1 Click 12GB VRAM Windows & RunPod / Linux Installer + Batch Upscale + Comparison With Magnific
- Beware When Buying M2 NVMe SSDs: Netac NV7000, Kioxia Exceria Plus G2, Kingston and Sandisk Compared
  The write-speed and cache testing Python script used ⤵️ https://github.com/FurkanGozukara/Stable-Diffusion/blob/main/CustomPythonScripts/gen_file.py (a minimal sketch of the idea appears after this list)
- Viral Paper Tested MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
- 56 Stable Diffusion And Related Generative AI Tutorials Organized List
  Our 1,200+ star GitHub repo of Stable Diffusion and other tutorials ⤵️ https://github.com/FurkanGozukara/Stable-Diffusion
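The NVMe SSD post above links to the author's gen_file.py write-speed and cache testing script. The snippet below is not that script; it is a minimal sketch, with an assumed file name, chunk size, and total size, of the general technique: write incompressible data in fixed-size chunks, fsync each chunk, and log per-chunk throughput so the slowdown when the drive's SLC cache fills becomes visible.

```python
import os
import time

CHUNK_MB = 256          # write in 256 MB chunks (assumption)
TOTAL_GB = 16           # total amount to write; should exceed the SLC cache (assumption)
PATH = "speedtest.bin"  # temporary test file (assumption)

chunk = os.urandom(CHUNK_MB * 1024 * 1024)  # incompressible data so the controller can't cheat
with open(PATH, "wb") as f:
    written = 0
    while written < TOTAL_GB * 1024**3:
        t0 = time.perf_counter()
        f.write(chunk)
        f.flush()
        os.fsync(f.fileno())            # force the data onto the drive, not just into OS cache
        dt = time.perf_counter() - t0
        written += len(chunk)
        print(f"{written / 1024**3:5.1f} GB  {CHUNK_MB / dt:7.1f} MB/s")
        # a sharp, sustained drop in MB/s usually marks SLC-cache exhaustion

os.remove(PATH)
```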
ColossalAI
- FLaNK AI-April 22, 2024
- Making large AI models cheaper, faster and more accessible
- ColossalChat: An Open-Source Solution for Cloning ChatGPT with an RLHF Pipeline
  > open-source a complete RLHF pipeline ... based on the LLaMA pre-trained model
  I've gotten to the point where, when I see "open source AI," I know it means "well, except for $some_other_dependencies."
  Anyway: https://scribe.rip/@yangyou_berkeley/colossalchat-an-open-so... and https://github.com/hpcaitech/ColossalAI#readme (Apache 2) can save you some medium.com heartache, at least.
- Meet ColossalChat: An Open-Source AI Solution For Cloning ChatGPT With A Complete RLHF Pipeline
  Quick read: https://www.marktechpost.com/2023/04/01/meet-colossalchat-an-open-source-ai-solution-for-cloning-chatgpt-with-a-complete-rlhf-pipeline/
  GitHub: https://github.com/hpcaitech/ColossalAI
  Examples: https://chat.colossalai.org/
- A top AI researcher reportedly left Google for OpenAI after sharing concerns the company was training Bard on ChatGPT data
  One of the current methods for training competing models is to have ChatGPT literally create prompt -> completion datasets. That's what was used for https://github.com/hpcaitech/ColossalAI: a model based on the LLaMA weights released by Facebook, then fine-tuned on ChatGPT 3.5 prompts + completions (a hedged sketch of that data-collection step appears after this list). So yes, there is a good chance that Google is literally using ChatGPT in the training loop.
- Colossal-AI: open-source RLHF pipeline based on LLaMA pre-trained model
- ColossalChat
- ColossalChat: An Open-Source Solution for Cloning ChatGPT with RLHF Pipeline
  Here's the GitHub repo from the article: https://github.com/hpcaitech/ColossalAI
- Open source solution replicates ChatGPT training process
  The article briefly covers their RLHF implementation; there are more details here: https://github.com/hpcaitech/ColossalAI/blob/a619a190df71ea3...
- How can I make my own ChatGPT?
  Here's the project on GitHub: https://github.com/hpcaitech/ColossalAI
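The Bard/ChatGPT comment above describes the general technique of having ChatGPT generate prompt -> completion pairs that are then used to fine-tune a LLaMA-based model. The sketch below illustrates only that data-collection step; it assumes the official openai Python client (v1.x) with an API key in the environment, and the seed prompts and output file name are placeholders. It is not ColossalAI's actual pipeline.

```python
import json
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY set in the environment

client = OpenAI()

# Seed prompts to collect completions for (toy examples).
prompts = [
    "Explain RLHF in two sentences.",
    "Write a haiku about distributed training.",
]

pairs = []
for prompt in prompts:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    pairs.append({"prompt": prompt, "completion": resp.choices[0].message.content})

# Dump prompt -> completion pairs as JSONL for a later supervised fine-tuning step.
with open("sft_pairs.jsonl", "w", encoding="utf-8") as f:
    for p in pairs:
        f.write(json.dumps(p, ensure_ascii=False) + "\n")
```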
What are some alternatives?
sd-dynamic-thresholding - Dynamic Thresholding (CFG Scale Fix) for Stable Diffusion (StableSwarmUI, ComfyUI, and Auto WebUI)
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Fooocus - Focus on prompting and generating
Megatron-LM - Ongoing research training transformer models at scale
multidiffusion-upscaler-for-automatic1111 - Tiled Diffusion and VAE optimization, licensed under CC BY-NC-SA 4.0
determined - Determined is an open-source machine learning platform that simplifies distributed training, hyperparameter tuning, experiment tracking, and resource management. Works with PyTorch and TensorFlow.
SUPIR - SUPIR aims at developing Practical Algorithms for Photo-Realistic Image Restoration In the Wild
fairscale - PyTorch extensions for high performance and large scale training.
caption-upsampling - This repository implements the idea of "caption upsampling" from DALL-E 3 with Zephyr-7B and gathers results with SDXL.
DeepFaceLive - Real-time face swap for PC streaming or video calls
CushyStudio - 🛋 The AI and Generative Art platform for everyone
PaddlePaddle - PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the PaddlePaddle core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning & machine learning)