ssqueezepy vs pytorch_wavelets

| | ssqueezepy | pytorch_wavelets |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 675 | 1,007 |
| Growth | 1.2% | 4.6% |
| Activity | 3.6 | 0.0 |
| Last commit | 8 days ago | over 1 year ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ssqueezepy

Posts mentioning ssqueezepy:

[OC] A lonely cough on a scalogram, yet rich in characteristics & distinctive properties
"Thanks for this great question - complex-valued Mexican hat mother wavelet here; used this implementation of synchrosqueezing: https://github.com/OverLordGoldDragon/ssqueezepy"
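For context, a minimal sketch of a synchrosqueezed CWT with ssqueezepy along the lines of the quoted comment. The call follows the `ssq_cwt` usage shown in the project's README; the `'cmhat'` (complex Mexican hat) wavelet name and the test signal are assumptions and worth verifying against the installed version.

```python
import numpy as np
from ssqueezepy import ssq_cwt

fs = 400                                        # sampling rate in Hz (illustrative)
t = np.arange(0, 2, 1 / fs)
x = np.cos(2 * np.pi * (20 * t + 10 * t ** 2))  # simple chirp as a stand-in signal

# Synchrosqueezed CWT: Tx is the sharpened time-frequency plane, Wx the plain CWT.
# 'cmhat' (complex Mexican hat) is an assumed wavelet name -- check your ssqueezepy version.
Tx, Wx, *_ = ssq_cwt(x, wavelet='cmhat', fs=fs)
print(Tx.shape, Wx.shape)
```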
[P] Fastest wavelet transforms in Python + synchrosqueezing
"ssqueezepy 0.6.1 released with benchmarks: CWT up to 75x faster than PyWavelets on CPU and up to 900x on GPU (and more correct). STFT is also CPU- and GPU-accelerated, and both transforms can be synchrosqueezed."
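As a rough illustration of the kind of comparison those release notes describe (not the project's actual benchmark suite), a sketch timing ssqueezepy's CWT against PyWavelets' on the same signal. The two libraries use different wavelets and scale grids, so this is not a like-for-like measurement; the `SSQ_GPU` environment flag in the comment is an assumption drawn from ssqueezepy's GPU notes and should be verified.

```python
import os
import time
import numpy as np
import pywt
from ssqueezepy import cwt

# os.environ['SSQ_GPU'] = '1'   # assumed flag name for GPU execution -- verify in ssqueezepy's docs

x = np.random.randn(100_000)

t0 = time.perf_counter()
Wx, scales = cwt(x, 'morlet')                         # ssqueezepy CWT (CPU)
t_ssq = time.perf_counter() - t0

t0 = time.perf_counter()
coef, freqs = pywt.cwt(x, np.arange(1, 129), 'morl')  # PyWavelets CWT
t_pywt = time.perf_counter() - t0

print(f"ssqueezepy: {t_ssq:.3f} s   pywt: {t_pywt:.3f} s")
```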
[D] Inductive biases for audio spectrogram data
"Have a look here"
pytorch_wavelets

Posts mentioning pytorch_wavelets:

How to create a docker environment for model use?
"`git clone https://github.com/fbcotter/pytorch_wavelets`"
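For reference, a minimal sketch of using the package once it is installed (typically `pip install .` from the cloned directory; check the project's README for exact steps). It follows the DWT example pattern from the pytorch_wavelets README; the tensor shape and wavelet choice below are illustrative.

```python
import torch
from pytorch_wavelets import DWTForward, DWTInverse

xfm = DWTForward(J=3, wave='db3', mode='zero')  # 3-level 2D forward DWT
ifm = DWTInverse(wave='db3', mode='zero')       # matching inverse transform

x = torch.randn(10, 5, 64, 64)                  # (batch, channels, height, width) -- illustrative shape
yl, yh = xfm(x)                                 # lowpass band + list of per-level highpass bands
x_rec = ifm((yl, yh))                           # reconstruct the input from the coefficients
print(yl.shape, len(yh), x_rec.shape)
```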
What are some alternatives?
madmom - Python audio and music signal processing library
pywt - PyWavelets - Wavelet Transforms in Python
cog - Containers for machine learning
ruptures - ruptures: change point detection in Python
WaveDiff - Official Pytorch Implementation of the paper: Wavelet Diffusion Models are fast and scalable Image Generators (CVPR'23)
audio-reactive-led-strip - Real-time LED strip music visualization using Python and the ESP8266 or Raspberry Pi
mish-cuda - Mish Activation Function for PyTorch
kymatio - Wavelet scattering transforms in Python with GPU acceleration
Visual-Mic - When sound hits an object, it causes small vibrations on the object’s surface. Here we show how, using only high-speed video of the object, we can extract those minute vibrations and partially recover the sound that produced them, allowing us to turn everyday objects—a glass of water, a potted plant, a box of tissues, or a bag of chips—into visual microphones.