descript-audio-codec VS Anime2Sketch

Compare descript-audio-codec vs Anime2Sketch and see what are their differences.

                      descript-audio-codec   Anime2Sketch
Mentions              2                      7
Stars                 917                    1,889
Growth                5.9%                   -
Activity              4.5                    3.4
Latest commit         about 2 months ago     9 months ago
Language              Python                 Python
License               MIT License            MIT License
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

descript-audio-codec

Posts with mentions or reviews of descript-audio-codec. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-08.
  • Show HN: Sonauto – a more controllable AI music creator
    1 project | news.ycombinator.com | 10 Apr 2024
    Hey HN,

    My cofounder (a classmate; we teamed up four months ago) and I trained an AI music generation model, and after a month of testing we're launching 1.0 today. Ours is interesting because it's a latent diffusion model instead of a language model, which makes it more controllable: https://sonauto.ai/

    Others do music generation by training a Vector Quantized Variational Autoencoder like Descript Audio Codec (https://github.com/descriptinc/descript-audio-codec) to turn music into tokens, then training an LLM on those tokens. Instead, we ripped the tokenization part off and replaced it with a normal variational autoencoder bottleneck (along with some other important changes to enable insane compression ratios). This gave us a nice, normally distributed latent space on which to train a diffusion transformer (like Sora). Our diffusion model is also particularly interesting because it is the first audio diffusion model to generate coherent lyrics!
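    The distinction the post draws can be sketched in a few lines. This is a toy illustration, not Sonauto's or DAC's actual code; the codebook, latent sizes, and function names are made up. A VQ bottleneck snaps each latent frame to its nearest codebook entry, yielding discrete token ids an LLM can model, while a plain VAE bottleneck leaves the latent continuous, which is what a diffusion model wants:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy encoder output: 10 latent frames, 4 dimensions each.
    latents = rng.normal(size=(10, 4))

    # --- VQ-VAE-style bottleneck (token-based codecs like DAC) ---
    # A codebook would normally be learned; random here for illustration.
    codebook = rng.normal(size=(16, 4))  # 16 codes of dimension 4

    def vector_quantize(z, codebook):
        """Map each latent frame to its nearest codebook entry (a token id)."""
        dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        tokens = dists.argmin(axis=1)   # discrete ids, suitable for an LM
        quantized = codebook[tokens]    # the decoder sees these vectors
        return tokens, quantized

    tokens, quantized = vector_quantize(latents, codebook)

    # --- Plain VAE-style bottleneck (what the post describes swapping in) ---
    # No lookup: the latent stays continuous, suitable for diffusion training.
    # (A real VAE would sample z = mu + sigma * eps from predicted moments.)
    continuous = latents

    print(tokens.shape)      # 10 integer token ids
    print(continuous.shape)  # 10 continuous 4-d latent frames
    ```

    The practical consequence is the one the post names: tokens force an autoregressive LM downstream, whereas a smooth, roughly Gaussian latent space is exactly the input a diffusion transformer is trained to denoise.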

    We like diffusion models for music generation because they have some interesting properties that make controlling them easier (so you can make your own music instead of just taking what the machine gives you). For example, we have a rhythm control mode where you can upload your own percussion line or set a BPM. Very soon you'll also be able to generate proper variations of an uploaded or previously generated song (e.g., you could even sing into Voice Memos for a minute and upload that!). @Musicians of HN, try uploading your songs and using Rhythm Control/let us know what you think! Our goal is to enable more of you, not replace you.

    For example, we turned this drum line (https://sonauto.ai/songs/uoTKycBghUBv7wA2YfNz) into this full song (https://sonauto.ai/songs/KSK7WM1PJuz1euhq6lS7, skip to 1:05 if impatient), or this other song I like better (https://sonauto.ai/songs/qkn3KYv0ICT9kjWTmins, though we accidentally compressed it with AAC instead of Opus, which hurt quality).

    We also like diffusion models because while they're expensive to train, they're cheap to serve. We built our own efficient inference infrastructure instead of using one of the expensive inference-as-a-service startups that are all the rage. That's why we're making generations on our site FREE and UNLIMITED for as long as possible.

    We'd love to answer your questions. Let us know what you think of our first model! https://sonauto.ai/

  • TSAC: Low Bitrate Audio Compression
    4 projects | news.ycombinator.com | 8 Apr 2024
    Another useful model to compare to would be DAC https://github.com/descriptinc/descript-audio-codec

    This is the codec that TSAC extended, so it could be a nice comparison to see. I'd also echo Vocos (from a sibling comment); it operates on the same Encodec tokens but generally has better reconstruction quality.
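    For context on what "low bitrate" means for a residual-VQ codec like DAC, the bitrate is just codebooks times bits-per-code times frame rate. The figures below are an assumption based on DAC's commonly cited 44.1 kHz configuration (9 codebooks of 1024 entries, one latent frame per 512 samples), not measured output:

    ```python
    import math

    def rvq_bitrate_bps(n_codebooks: int, codebook_size: int,
                        frame_rate_hz: float) -> float:
        """Bitrate of a residual-VQ codec: codebooks * bits/code * frames/sec."""
        return n_codebooks * math.log2(codebook_size) * frame_rate_hz

    # Assumed DAC 44.1 kHz config: 9 codebooks of 1024 codes,
    # one latent frame per 512 input samples.
    frame_rate = 44100 / 512                      # ~86.1 frames per second
    print(rvq_bitrate_bps(9, 1024, frame_rate))   # ~7.75 kbps
    ```

    That ballpark (roughly 8 kbps for 44.1 kHz stereo-quality audio) is the baseline a codec like TSAC is pushing below.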

Anime2Sketch

Posts with mentions or reviews of Anime2Sketch. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-05-06.

What are some alternatives?

When comparing descript-audio-codec and Anime2Sketch you can also consider the following projects:

U-2-Net - The code for our newly accepted paper in Pattern Recognition 2020: "U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection."

UGATIT - Official Tensorflow implementation of U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation (ICLR 2020)

GANime - A deep learning program that automatically generates colorized anime characters based on sketch drawings.

PromptGallery-stable-diffusion-webui - A prompt cookbook packaged as a stable-diffusion-webui extension.

StyleSwin - [CVPR 2022] StyleSwin: Transformer-based GAN for High-resolution Image Generation

anime-face-detector - Anime Face Detector using mmdet and mmpose

clean-fid - PyTorch - FID calculation with proper image resizing and quantization steps [CVPR 2022]

pytorch-pretrained-BigGAN - 🦋A PyTorch implementation of BigGAN with pretrained weights and conversion scripts.

vision-aided-gan - Ensembling Off-the-shelf Models for GAN Training (CVPR 2022 Oral)

photo2cartoon - 人像卡通化探索项目 (photo-to-cartoon translation project)

edge-connect - EdgeConnect: Structure Guided Image Inpainting using Edge Prediction, ICCV 2019 https://arxiv.org/abs/1901.00212

Anime-Generation - 🎨 Anime generation with GANs.