pytorch_clip_guided_loss vs APDrawingGAN

| | pytorch_clip_guided_loss | APDrawingGAN |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 77 | 773 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest commit | over 2 years ago | almost 2 years ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
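The recency weighting described above can be sketched as follows. The site does not publish its actual formula, so the half-life decay used here is purely an illustrative assumption, and `activity_score` is a hypothetical name:

```python
import math

def activity_score(commit_ages_days, half_life_days=30.0):
    """Toy activity score: each commit contributes a weight that halves
    every `half_life_days`, so recent commits count more than old ones.
    (Illustrative only -- the real metric's formula is not given.)"""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# A repo with three recent commits scores higher than one with
# three commits made roughly a year ago.
recent = activity_score([1, 2, 3])
stale = activity_score([300, 400, 500])
```

Under this sketch, a score of 0.0 (as in the table above) simply means no commits recent enough to carry meaningful weight.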
pytorch_clip_guided_loss
[P] ClipRCNN: Tiny text-guided zero-shot object detector
This approach is far from perfect, but it is really simple and works after writing just a few lines of code. You can find our implementation of ClipRCNN here: https://github.com/bes-dev/pytorch_clip_guided_loss/tree/master/examples/object_detection
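The idea behind a text-guided zero-shot detector of this kind is straightforward: generate candidate boxes, embed each crop and the text query with CLIP, and keep the box whose embedding is most similar to the query. A minimal sketch in plain Python follows; the toy vectors stand in for real CLIP embeddings, and all names are hypothetical rather than the repo's actual API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_box(box_embeddings, text_embedding):
    """Return the index of the candidate box whose (CLIP-style) crop
    embedding best matches the text query embedding."""
    scores = [cosine(e, text_embedding) for e in box_embeddings]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy example: the second box's embedding points the same way as the query,
# so it wins the ranking.
boxes = [[1.0, 0.0], [0.6, 0.8], [0.0, 1.0]]
query = [0.6, 0.8]
winner = best_box(boxes, query)
```

In a real pipeline the proposals would come from a region-proposal method and the embeddings from a CLIP image/text encoder; only the ranking step is shown here.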
A new library to make CLIP-guided image generation simpler.
There are different ways to generate images from text descriptions, but one of the most powerful approaches to synthetic art is CLIP-guided image generation. We provide a new Python library that encapsulates the whole logic of the CLIP-guided loss in a single PyTorch primitive with a simple API. It supports different CLIP models (such as the original CLIP models by OpenAI and the ruCLIP model by SberAI), multiple prompts (texts or images) as optimization targets, and automatic language detection and translation of input texts. We also provide a tiny implementation of VQGAN-CLIP built on our library and the VQVAE by SberAI (in my opinion, the best publicly available version of VQGAN) for text-to-image generation. Our library is all you need to integrate text-powered losses into your image-synthesis pipelines by adding a few lines of code. You can find the library here (a PyPI package is available): https://github.com/bes-dev/pytorch_clip_guided_loss
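Conceptually, the loss primitive described above reduces to a weighted sum of distances between an image embedding and the embeddings of the registered prompts. Here is a minimal sketch in plain Python; the class and method names are hypothetical (not the library's actual API), and real CLIP encoders would replace the toy vectors:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

class ClipGuidedLoss:
    """Hypothetical sketch: aggregate (1 - cosine similarity) between an
    image embedding and several prompt embeddings, weighted per prompt."""
    def __init__(self):
        self.prompts = []  # list of (embedding, weight)

    def add_prompt(self, embedding, weight=1.0):
        """Register a text or image prompt by its (CLIP-style) embedding."""
        self.prompts.append((embedding, weight))

    def loss(self, image_embedding):
        """Weighted mean of (1 - cosine) across all registered prompts;
        0 means the image embedding matches the prompts perfectly."""
        total_w = sum(w for _, w in self.prompts)
        return sum(w * (1.0 - cosine(image_embedding, e))
                   for e, w in self.prompts) / total_w

loss_fn = ClipGuidedLoss()
loss_fn.add_prompt([0.0, 1.0])               # e.g. a text prompt's embedding
loss_fn.add_prompt([1.0, 0.0], weight=0.5)   # e.g. an image prompt's embedding
value = loss_fn.loss([0.0, 1.0])
```

In a VQGAN-CLIP style pipeline, a loss like this is backpropagated through the generator's latent code each step, steering the image toward the prompts.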
APDrawingGAN
Image to hand drawn
Hi, I'm looking for more projects that will turn an image into a "hand-drawn" image. These are the ones I've found so far; they are all based on the same dataset from APDrawingGAN. This is a scaled-down image. The originals were generated at 1200 px width (512 for APDrawingGAN).
What are some alternatives?
ArtLine - A Deep Learning based project for creating line art portraits.
pix2pixHD - Synthesizing and manipulating 2048x1024 images with conditional GANs
MobileStyleGAN.pytorch - An official implementation of MobileStyleGAN in PyTorch
HR-VITON - Official PyTorch implementation for the paper High-Resolution Virtual Try-On with Misalignment and Occlusion-Handled Conditions (ECCV 2022).
concept-ablation - Ablating Concepts in Text-to-Image Diffusion Models (ICCV 2023)
fourier_feature_nets - Supplemental learning materials for "Fourier Feature Networks and Neural Volume Rendering"
anycost-gan - [CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing
ArtGAN - ArtGAN + WikiArt: This work presents a series of new approaches to improve GAN for conditional image synthesis and we name the proposed model as “ArtGAN”.
StyleSwin - [CVPR 2022] StyleSwin: Transformer-based GAN for High-resolution Image Generation
U-2-Net - The code for our newly accepted paper in Pattern Recognition 2020: "U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection."
2dimageto3dmodel - We evaluate our method on different datasets (including ShapeNet, CUB-200-2011, and Pascal3D+) and achieve state-of-the-art results, outperforming all the other supervised and unsupervised methods and 3D representations, all in terms of performance, accuracy, and training time.