| | guided-diffusion-keras | ImageReward |
|---|---|---|
| Mentions | 1 | 1 |
| Stars | 75 | 952 |
| Growth | - | 5.5% |
| Activity | 5.7 | 6.7 |
| Last commit | 4 months ago | 8 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
guided-diffusion-keras
Train Text to Image Diffusion Models in Keras
Repo here: https://github.com/apapiu/guided-diffusion-keras
ImageReward
Results of fine-tuning Avalon TRUvision v2 with image scoring
I used the ImageReward repo to score generated images during training and modified the loss function to take the score into account.
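The reward-weighted training described above can be sketched as follows. This is a minimal illustration, not the author's actual code: the function name, the `reward_scale` knob, and the simple linear blend are all assumptions about how a per-sample loss might take an ImageReward-style score into account.

```python
def reward_weighted_loss(base_loss, reward, reward_scale=0.1):
    """Blend an ordinary diffusion/denoising loss with an image-reward term.

    base_loss    -- per-sample training loss (e.g. denoising MSE)
    reward       -- score for the generated image from a reward model
                    such as ImageReward (higher means a better image)
    reward_scale -- hypothetical knob controlling how strongly the
                    reward steers training
    """
    # Subtracting a scaled reward lowers the loss for high-reward samples,
    # so the optimizer is nudged toward images the reward model prefers.
    return base_loss - reward_scale * reward

# Toy usage: two samples with equal base loss but different reward scores.
losses = [reward_weighted_loss(1.0, r) for r in (0.2, 0.9)]
```

In practice the reward would come from scoring each generated image with the reward model during the training loop; the sketch only shows how that score could be folded into the loss.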
What are some alternatives?
Diffusion-Models-Papers-Survey-Taxonomy - Diffusion model papers, survey, and taxonomy
WebGLM - WebGLM: An Efficient Web-enhanced Question Answering System (KDD 2023)
e4t-diffusion - Implementation of Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models
minimal-text-diffusion - A minimal implementation of diffusion models for text generation
MeshDiffusion - Official implementation of "MeshDiffusion: Score-based Generative 3D Mesh Modeling" (ICLR 2023 Spotlight)
LLMSurvey - The official GitHub page for the survey paper "A Survey of Large Language Models".
TextRL - Implementation of ChatGPT RLHF (Reinforcement Learning with Human Feedback) on any generation model in Hugging Face's transformers (bloomz-176B/bloom/gpt/bart/T5/MetaICL)
safe-rlhf - Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
argilla - Argilla is a collaboration platform for AI engineers and domain experts that require high-quality outputs, full data ownership, and overall efficiency.