examples

Deep Learning Examples (by LambdaLabsML)

Examples Alternatives

Similar projects and alternatives to examples

NOTE: The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives, so a higher count suggests a more popular or more similar alternative to examples.

examples reviews and mentions

Posts with mentions or reviews of examples. We have used some of these posts to build our list of alternatives and similar projects. The most recent was on 2023-02-23.
  • SD 1.4: Switching Clip with a new encoder
    1 project | /r/StableDiffusion | 23 Feb 2023
    Hello everyone, I am trying to fine-tune a Stable Diffusion 1.4 model on specific images that require specific descriptions. I am following this GitHub repo, which is a fork of the original one; I have 12,000 images and am at the 20th epoch with a loss of 0.199: https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning
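    Fine-tuning setups like the one in the linked repo train on image–caption pairs. As an illustration only (the filenames and captions below are hypothetical, not from the post), here is a minimal sketch of preparing such a dataset in the `metadata.jsonl` layout that Hugging Face's ImageFolder loader, used by many fine-tuning scripts, can read:

    ```python
    import json
    from pathlib import Path

    # Hypothetical image–caption pairs; in practice each caption
    # would be a specific description of the corresponding image.
    pairs = [
        ("img_0001.jpg", "a red sports car parked on a city street"),
        ("img_0002.jpg", "a watercolor painting of a mountain lake"),
    ]

    dataset_dir = Path("my_dataset")
    dataset_dir.mkdir(exist_ok=True)

    # One JSON object per line, alongside the image files themselves.
    with open(dataset_dir / "metadata.jsonl", "w") as f:
        for file_name, caption in pairs:
            f.write(json.dumps({"file_name": file_name, "text": caption}) + "\n")
    ```

    The training script then pairs each `file_name` with its `text` caption when building batches.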
  • Custom model training question
    1 project | /r/StableDiffusion | 28 Dec 2022
    It seems there are two ways: 1) Use Dreambooth technique (joepenna, Shivam's, lastben repos) 2) Train on top of original stable-diffusion model (as described for example here https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning, on XavierXiao repo)
  • Differences between a hypernetwork, embedding and Dreambooth models?
    1 project | /r/StableDiffusion | 10 Dec 2022
    If you want to make or customize a model:
    • Fine-tuning a model (not Dreambooth). You're essentially continuing the training process that the SD authors used. It requires professional-grade AI hardware and takes a while; people seem to not even know this exists. You start with some base model (usually plain SD, but it could be any model) and fine-tune it. You should assume fine-tuning will make the model unsuitable for anything else -- for instance, if you tune on one person's face, expect it never to generate anyone else's face, and if you fine-tune on one art style, any other art style may suffer.
    • Dreambooth is a different method for fine-tuning a model, needing a fraction of the power and time "real" fine-tuning does. But it still takes a lot of power: the most optimized Dreambooth tools need 12 GB of VRAM, and most graphics cards don't even have that.
    • There are several competitors to the Dreambooth method, such as EveryDream, which claim better results and sometimes claim to need only one photo. I'm not sure how things have really played out, especially since you can't tell the difference between "this method sucks" and "this method is great but everyone is using it wrong".
    • Hypernetworks take less time and power than Dreambooth; some testify they are better for style training than Dreambooth.
    • Textual inversions (TIs) take the least time and power; I recently saw a training method for 6 GB VRAM cards.
    • Aesthetic gradients don't need training! :)
  • Can't clone from Huggingface?
    1 project | /r/MLQuestions | 2 Dec 2022
  • Huggingface cloning not working, more info inside
    1 project | /r/MachineLearning | 2 Dec 2022
  • Was told to crosspost here. My new D&D model!! Trained for 30,000 steps on 2500 manually labelled images. Questions and advice welcome!
    1 project | /r/dndai | 12 Nov 2022
    I BLIP-captioned the images to try to retrain using this approach: https://github.com/LambdaLabsML/examples/blob/main/stable-diffusion-finetuning/pokemon_finetune.ipynb. I used the BLIP captions and put "D&D character {race}" in front, where race was the race I manually annotated. After that, for Dreambooth (I followed this roughly: https://www.youtube.com/watch?v=7bVZDeGPv6I), you don't need to rename the images; just put them in the same folder, which you specify in a JSON file that Dreambooth reads to know how to handle each class.
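    The caption-prefixing step described above can be sketched as follows. This is a minimal illustration, assuming hypothetical filenames, BLIP captions, and race labels (none of these come from the post):

    ```python
    # Hypothetical BLIP-generated captions keyed by filename.
    blip_captions = {
        "char_01.png": "a drawing of a woman holding a sword",
        "char_02.png": "a portrait of an old man with a beard",
    }
    # Hypothetical manually annotated race labels for the same images.
    races = {"char_01.png": "elf", "char_02.png": "dwarf"}

    def prefixed_caption(name):
        # Put "D&D character {race}" in front of the BLIP caption,
        # as the post describes.
        return f"D&D character {races[name]}, {blip_captions[name]}"

    print(prefixed_caption("char_01.png"))
    # D&D character elf, a drawing of a woman holding a sword
    ```

    The resulting captions would then be paired with the images for fine-tuning; the exact separator between prefix and caption is a free choice.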
  • My new D&D model!! Trained for 30,000 steps on 2500 manually labelled images
    3 projects | /r/StableDiffusion | 11 Nov 2022
    Trained a Dreambooth model from v1.5 checkpoint. I tried finetuning the model using this approach: https://github.com/LambdaLabsML/examples/blob/main/stable-diffusion-finetuning/pokemon_finetune.ipynb, but I didn't achieve results I liked
  • How To Fine Tune Stable Diffusion: Naruto Character Edition
    2 projects | /r/StableDiffusion | 3 Nov 2022
    Thank you! This model training did not use dreambooth. Here is the reference repo I used, it is based on the original training repo for stable diffusion. Dreambooth is a more sophisticated framework and I am very interested in doing a side-by-side comparison against this model as a follow-up.
  • [D] DreamBooth Stable Diffusion training now possible in 24GB GPUs, and it runs about 2 times faster.
    2 projects | /r/MachineLearning | 26 Sep 2022
  • [P] Stable Diffusion finetuned on Pokemon!
    1 project | /r/MachineLearning | 21 Sep 2022
    Code and details: https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning

Stats

Basic examples repo stats
Mentions: 12
Stars: 788
Activity: 7.2
Last commit: 3 months ago

LambdaLabsML/examples is an open source project licensed under MIT License which is an OSI approved license.

The primary programming language of examples is Jupyter Notebook.

