stable_diffusion_arc vs tiny_llm_finetuner

| | stable_diffusion_arc | tiny_llm_finetuner |
|---|---|---|
| Mentions | 4 | 3 |
| Stars | 63 | 16 |
| Growth | - | - |
| Activity | 7.8 | 6.2 |
| Last commit | about 2 months ago | 6 months ago |
| Language | Jupyter Notebook | Python |
| License | BSD 2-Clause "Simplified" | MIT |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable_diffusion_arc

- Intel Arc GPU price drop - inexpensive llama.cpp OpenCL inference accelerator?
  Stable Diffusion on Arc 770 - https://github.com/rahulunair/stable_diffusion_arc
- Is the A770 (Intel's GPU) capable of running Stable Diffusion with AUTOMATIC1111?
  Is it capable? Yes.
- [GPU] Intel Arc A770 16GB GDDR6 PCI Express 4.0 x16 Video Card + Call of Duty: Modern Warfare II bundle - $349.99
  There are Intel extensions for scikit-learn, PyTorch, and TensorFlow. Someone got Stable Diffusion running on this GPU too; I'd love to see how it performs.
- Does AUTOMATIC1111's Stable Diffusion distribution work with an Intel Arc GPU?
  Greetings, Stable Diffusion community! I have a question, and you may have my answer. I'm looking for any user who is either using, or is willing to try, AUTOMATIC1111's Stable Diffusion GUI with Intel's Arc A770 16 GB card. I'm looking to add a second GPU to my rig and want something relatively fast with a high VRAM capacity. This card fits the bill, and there is a GitHub repo about running it on a Linux machine. Unfortunately, I am a Windows user and not very experienced in coding, so I can't confirm it will work. I'd like to refrain from purchasing the card until I know for certain it will work on my OS and with this SD version.
tiny_llm_finetuner

- Finetuning openLLaMA on Intel discrete GPUs
  A finetuner for LLMs on Intel XPU devices, with which you can finetune the openLLaMA-3b model to sound like your favorite book. https://github.com/rahulunair/tiny_llm_finetuner
- LLM finetuning on Intel discrete GPUs
- Intel Arc GPU price drop - inexpensive llama.cpp OpenCL inference accelerator?
  LLM fine-tuning using LoRA on Intel dGPUs: https://github.com/rahulunair/tiny_llm_finetuner
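The LoRA technique mentioned in these posts trains a small low-rank delta on top of a frozen weight matrix instead of updating the full matrix. The following is a minimal NumPy sketch of that idea only, not code from the tiny_llm_finetuner repo, and the dimensions and rank are illustrative assumptions:

```python
import numpy as np

# LoRA sketch: instead of updating a full weight matrix W (d x k),
# train a low-rank delta B @ A with rank r << min(d, k).
d, k, r = 768, 768, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable, shape (r, k)
B = np.zeros((d, r))                     # trainable, shape (d, r); zero init
                                         # means the model is unchanged at start

def lora_forward(x):
    # Original path plus the low-rank adaptation path.
    return x @ W.T + x @ (B @ A).T

x = rng.standard_normal((4, k))
y = lora_forward(x)

full_params = d * k                      # parameters a full update would touch
lora_params = A.size + B.size            # parameters LoRA actually trains
print(f"trainable params: {lora_params} vs full update: {full_params}")
```

Because only A and B are trained (here 12,288 values versus 589,824 for the full matrix), the optimizer state and gradients are much smaller, which is what makes finetuning a 3B-parameter model feasible on a single consumer GPU.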
What are some alternatives?

- RealScaler - image/video AI upscaler app (Real-ESRGAN)
- tiny_llm_finetuning - LLM finetuning on Intel XPUs - LoRA on Intel discrete GPUs [Moved to: https://github.com/rahulunair/tiny_llm_finetuner]
- llama2.openvino - a sample showing how to implement a LLaMA-based model with the OpenVINO runtime
- scikit-learn-intelex - Intel(R) Extension for Scikit-learn, a seamless way to speed up your scikit-learn application
- stable-diffusion-webui - Stable Diffusion web UI
- optimum - 🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy-to-use hardware optimization tools
- pyprobml - Python code for the book "Probabilistic Machine Learning" by Kevin Murphy
- QualityScaler - image/video deep-learning upscaling for any GPU
- nlp-tutorial - a natural language processing tutorial for deep learning researchers
- DeepLearningExamples - state-of-the-art deep learning scripts organized by model, easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure