spleeter
stylegan2-ada-pytorch
| | spleeter | stylegan2-ada-pytorch |
|---|---|---|
| Mentions | 230 | 30 |
| Stars | 24,839 | 3,901 |
| Growth | 1.2% | 1.5% |
| Activity | 1.5 | 2.3 |
| Latest commit | about 1 month ago | 3 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
spleeter
-
Show HN: Free AI-based music demixing in the browser
I tried to use it, but I ran into some of the same issues as others in the thread.
I have tried many sources and methods over the years and settled on spleeter [0]. It works well even for 10+ minute songs, across styles ranging from flamenco to heavy metal.
-
Are there any websites or programs that can separate vocals and drums from samples?
Chopped from their website: Simple Stems is a quick and easy way to decompose any audio into its constituent parts. The plugin uses the well-established Spleeter algorithm by Deezer to deconstruct songs into 2, 4 or 5 stems. The results are stunning, though more complicated mixes and live recordings are not always perfectly decomposed.
-
Ask HN: Is there an ML model that can go from an audio song to sheet music?
I was going to post basic pitch from Spotify but it looks like billconan beat me to it. That said, I can give you a bit more advice. The Spotify basic pitch model isn't too good at multi-track input. It's capable of it, but you may actually get better results if you separate out the tracks first and then run them individually through the basic pitch model.
In order to do this you can use a source/stem separation model like spleeter (https://github.com/deezer/spleeter) and then run the basic pitch model (or any other MIDI transcription model). There are others you can try that may yield better results, for example: (https://github.com/Music-and-Culture-Technology-Lab/omnizart)
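The separate-then-transcribe pipeline described above can be sketched as two CLI invocations per the tools' documented interfaces. The file names here (`song.mp3`, the 4stems output layout) are assumptions based on spleeter's default output convention of writing `<outdir>/<song name>/<stem>.wav`:

```python
# Sketch of the two-step pipeline: stem separation with spleeter, then MIDI
# transcription of each stem with Spotify's basic-pitch CLI.
from pathlib import Path

def build_pipeline_commands(song: str, stems_dir: str = "stems",
                            midi_dir: str = "midi") -> list[list[str]]:
    """Return the CLI invocations for separation, then per-stem transcription."""
    # Step 1: split the song into vocals/drums/bass/other.
    commands = [["spleeter", "separate", "-p", "spleeter:4stems",
                 "-o", stems_dir, song]]
    # Step 2: spleeter writes stems into <stems_dir>/<song name>/<stem>.wav;
    # transcribing each stem individually tends to give cleaner MIDI.
    song_name = Path(song).stem
    for stem in ("vocals", "drums", "bass", "other"):
        commands.append(["basic-pitch", midi_dir,
                         f"{stems_dir}/{song_name}/{stem}.wav"])
    return commands

for cmd in build_pipeline_commands("song.mp3"):
    print(" ".join(cmd))
```

You would run each command with `subprocess.run(cmd, check=True)` (or paste them into a shell) once both tools are installed.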
Either way the key words you want to be looking for are "midi transcription" and "stem separation", should help you find more models to try for both steps. Good luck! :)
-
Anyone here have experience writing VST audio plugins in C++, or 'wrapping'/converting a VST to an AU plug-in?
I'm chasing my white whale, which is to create a real-time version of the audio stem separation tool 'Spleeter' that I've been using for a few years now to remove instruments like drums/bass guitar from existing music so that I can play along at home.
-
Separate soundtrack from voices
I use Spleeter.
- [Audio Engineering] The ultimate vocal remover is "holy sh*t" level good
- Any self hosted vocal removal utility.
-
Ultimate Vocal Remover is "holy sh*t" level good
Some of you have probably heard of spleeter, a machine learning program developed by Deezer that isolates instruments. It was pretty good, but it had some obvious weaknesses. But what if I told you that there's something even better? Ultimate Vocal Remover is so good I audibly said "holy sh*t" when I listened to what it produced. It recently released a full-band model (UVR-MDX-NET Inst HQ 1), unlike spleeter which has an 11kHz cutoff.
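To make the cutoff complaint concrete: a toy illustration (not spleeter itself) of what an 11 kHz cutoff means is that any spectral content above 11 kHz is simply absent from the output. The sketch below synthesizes a 5 kHz + 15 kHz mixture at 44.1 kHz, zeroes the FFT bins above 11 kHz, and checks that only the 5 kHz tone survives:

```python
import numpy as np

sr = 44_100
t = np.arange(sr) / sr                      # one second of audio
x = np.sin(2 * np.pi * 5_000 * t) + np.sin(2 * np.pi * 15_000 * t)

spec = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1 / sr)
spec[freqs > 11_000] = 0                    # brick-wall low-pass at 11 kHz
y = np.fft.irfft(spec, n=len(x))

out = np.abs(np.fft.rfft(y))
print(out[np.argmin(np.abs(freqs - 5_000))])    # 5 kHz tone intact (large)
print(out[np.argmin(np.abs(freqs - 15_000))])   # 15 kHz tone gone (~0)
```

A full-band model like the UVR one mentioned above avoids this by modeling the whole spectrum rather than discarding the top octave.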
-
Drumsthesia - a simple software that helps you to learn how to play the drums
I'm actually planning on using something like spleeter to create drumless tracks from youtube videos on a website where people can share the pre-synced music sheets and audio.
-
I have questions about producing covers on Synth V
2) It's possible to separate vocals (and other stems) using software like Demucs, LALAL.AI, etc... While it isn't a perfect solution, the technology for this is getting better and better. I originally used Spleeter but now exclusively use Demucs (v4) since the results are highly impressive. If you're not comfortable setting up a python environment, there are some websites like AudioStrip that allow you to try it online with limitations.
stylegan2-ada-pytorch
-
Samsung expected to report 80% profit plunge as losses mount at chip business
> there is really nothing that "normal" AI requires that is bound to CUDA. pyTorch and Tensorflow are backend agnostic (ideally...).
There are a lot of optimizations in CUDA that are nowhere near supported in other software or even hardware. Custom CUDA kernels also aren't as rare as one might think; they will often just be hidden unless you're looking at libraries. A well-known example is StyleGAN[0], but it isn't uncommon elsewhere, even in research code. Swin has a CUDA kernel[1], and PyTorch itself ships CUDA sources (GitHub reports that 4% of the code is CUDA, 42% C++, and 2% C). These things are everywhere. I don't think PyTorch and TensorFlow could ever be truly backend-agnostic; there will always be a gap simply because vendors spend resources differently (developing kernels takes time and resources). Intel MKL is evidence of this: it has been better than the open-source alternatives for a long time.
I really do want AMD to compete in this space. I'd even love a third player like Intel. We really do need competition here, but it would be naive to think that there's going to be a quick catchup here. AMD has a lot of work to do and posting a few bounties and starting a company (idk, called "micro grad"?) isn't going to solve the problem anytime soon.
And fwiw, I'm willing to bet that most AI companies would rather run in-house servers than rent from cloud service providers. The truth is that right now publishing is strongly correlated with compute infrastructure (it doesn't need to be, but with all the noise we've effectively said "fuck the poor", because rejecting is easy), and anyone building products has costly infrastructure.
[0] https://github.com/NVlabs/stylegan2-ada-pytorch/blob/d72cc7d...
[1] https://github.com/microsoft/Swin-Transformer/blob/2cb103f2d...
-
[P] Frechet Inception Distance
One irritating flaw with FID is that scores are massively biased by the number of samples, that is, the fewer samples you use, the larger the score. So to make comparisons fair it's absolutely crucial to use the same number of samples. From what I've seen on standard benchmarks it's pretty common now to compute Inception features for every single data point, but only for 50k samples from generative models (for reference off the top of my head StyleGAN2-ADA does this, see Appendix A).
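The bias described above is easy to demonstrate. A minimal FID between two Gaussians fitted to feature sets (the standard Fréchet distance formula; the dimension and sample sizes below are arbitrary choices for illustration) shows that even samples from the *same* distribution score worse when there are fewer of them:

```python
import numpy as np
from scipy import linalg

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2)."""
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real    # discard tiny imaginary parts from sqrtm
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2 * covmean))

rng = np.random.default_rng(0)
d = 8
ref_mu, ref_sigma = np.zeros(d), np.eye(d)   # "true" feature statistics

scores = {}
for n in (100, 10_000):
    x = rng.standard_normal((n, d))          # samples from the same Gaussian
    scores[n] = fid(x.mean(0), np.cov(x, rowvar=False), ref_mu, ref_sigma)

print(scores)  # the n=100 estimate comes out noticeably larger
```

This is exactly why comparisons at mismatched sample counts (e.g. all real data vs 50k generated samples) need the convention to be stated and held fixed.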
-
City Does Not Exist
First, you have to collect a few thousand images of the same thing (maybe more or less depending on how complex your thing is or how good the results should be). Then, you train a generative adversarial neural network on those images to generate new images. https://github.com/NVlabs/stylegan2-ada-pytorch works quite well. https://github.com/NVlabs/stylegan3 is supposedly even better, but I did not try it yet.
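The workflow above maps onto two commands from the stylegan2-ada-pytorch README; the paths and dataset name here are placeholders, and training requires a CUDA-capable GPU:

```shell
# 1) Pack a folder of same-size images into the repo's dataset format:
python dataset_tool.py --source=~/my-images --dest=~/datasets/mycity.zip

# 2) Train; --gpus depends on your hardware:
python train.py --outdir=~/training-runs --data=~/datasets/mycity.zip --gpus=1
```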
- Modern Propaganda (this person does not exist)
-
This Bot Crime Did Not Occur
I used a modified version of this repo, and there's also the official NVIDIA implementation, though neither have official notebooks. You can Google 'StyleGAN2 ADA Colab' and find a few starting points that way, but wait a few hours and I can clean up my notebook and post it here!
-
[P] Suggest a Conditional GAN for a project?
Consider this repo: https://github.com/NVlabs/stylegan2-ada-pytorch. It is quite well documented and has conditions built-in. I have worked with this code recently and it is easy to make your own modifications, so if you don’t shy away from doing some minor work yourself, I imagine you could make quantitative conditions work with a few changes to the input of the mapping network.
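For reference, the built-in conditioning in that repo is class-conditional: you ship integer labels with the dataset via a `dataset.json` inside the dataset zip, then launch `train.py` with `--cond=1`. A minimal sketch of the label file (file names and class indices are placeholders):

```json
{
  "labels": [
    ["00000/img00000000.png", 0],
    ["00000/img00000001.png", 2]
  ]
}
```

Continuous/quantitative conditions are not supported out of the box, which is why the comment above suggests modifying the mapping network's input.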
-
[OC] This NPC Does Not Exist: I created an AI to generate NPC portraits
Both tools rely on a stylegan2 encoder which was finetuned using a set of drawn portraits I've been collecting for some time.
-
[D] What is the smallest dataset you styleGAN2 trained?
The authors of StyleGAN2-ADA already tried many things in their paper; I suggest you check it out: https://arxiv.org/abs/2006.06676. Sections 4.2 and 4.3 in particular.
-
Stylegan2-ada x lucid sonic dreams x animal eyes
This video was created with the following repositories: stylegan2-ada-pytorch, lucid-sonic-dreams, and spleeter.
-
So here is what an AI thinks Naruto would look like in real life
I do in the description of the video! But to make your life easier: StyleGAN2-Ada source code: https://github.com/NVlabs/stylegan2-ada-pytorch Pixel2Style2Pixel source code: https://github.com/eladrich/pixel2style2pixel
What are some alternatives?
ultimatevocalremovergui - GUI for a Vocal Remover that uses Deep Neural Networks.
open-unmix-pytorch - Open-Unmix - Music Source Separation for PyTorch
demucs - Code for the paper Hybrid Spectrogram and Waveform Source Separation
stylegan3 - Official PyTorch implementation of StyleGAN3
SpleeterGui - Windows desktop front end for Spleeter - AI source separation
pixel2style2pixel - Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
BigGAN-PyTorch - The author's officially unofficial PyTorch BigGAN implementation.
SpleetGUI - Spleeter GUI version
spleeter-web - Self-hostable web app for isolating the vocal, accompaniment, bass, and drums of any song. Supports Spleeter, D3Net, Demucs, Tasnet, X-UMX. Built with React and Django.
StyleFlow - StyleFlow: Attribute-conditioned Exploration of StyleGAN-generated Images using Conditional Continuous Normalizing Flows (ACM TOG 2021)
nodejs-poolController - An application to control pool equipment from various manufacturers.
youtube-dl-gui - A cross-platform GUI for youtube-dl made in Electron and node.js