| | audio-visualizer-android | Panako |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 820 | 174 |
| Growth | - | - |
| Activity | 0.0 | 4.0 |
| Last commit | 10 months ago | 5 months ago |
| Language | Java | Java |
| License | Apache License 2.0 | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
audio-visualizer-android
-
App crash after activity using audio-visualizer-android
The visual animation is the BlobVisualizer from https://github.com/gauravk95/audio-visualizer-android .
Panako
-
Show HN: Pyzam, Shazam for DJs and Mixtapes in Python
Hello, really glad to see a project like this popping up. I have a few questions, as I was working on something similar a few years ago:
1. I did some development myself for a "Track Discovery for DJs"[1] project in this space of "DJ music recognition", and I am wondering how you are able to handle mixtapes and DJ mixes when a significant amount of sound manipulation/distortion is applied, like pitch/tempo changes plus various effects. In my tests this totally confused algorithms that were not designed to handle such cases.
2. Can you share which algorithm you implemented for this project? I read most of the research papers in this space, and my preferred solution was to build upon https://github.com/JorenSix/Panako, which I did.
In genres like minimal microhouse/techno, where tracks often share similar rhythm patterns or are even built from the same sample packs, it proved difficult to get reliable results.
I was investigating how Spotify and other market leaders do track recognition, and they train ML models on the same track with 100+ different effects applied...
Curious to hear your thoughts...
[1] - https://rominimal.club
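For readers unfamiliar with the fingerprinting approach the comment refers to: Shazam-style systems (and, with a more distortion-resistant hash design, Panako) extract spectral peaks from a spectrogram and pair nearby peaks into compact hashes. The sketch below is purely illustrative — the window sizes, neighborhood, and fan-out values are made up for the demo and are not Panako's actual parameters or API:

```python
# Illustrative sketch of "landmark" audio fingerprinting:
# pick spectral peaks, then pair nearby peaks into hashes.
# All parameter values here are demo assumptions, not Panako's settings.
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram via a simple windowed FFT."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (time, freq)

def find_peaks(spec, neighborhood=4):
    """Keep time/frequency bins that dominate their local neighborhood."""
    peaks = []
    t_max, f_max = spec.shape
    for t in range(t_max):
        for f in range(f_max):
            t0, t1 = max(0, t - neighborhood), min(t_max, t + neighborhood + 1)
            f0, f1 = max(0, f - neighborhood), min(f_max, f + neighborhood + 1)
            patch = spec[t0:t1, f0:f1]
            if spec[t, f] == patch.max() and spec[t, f] > patch.mean():
                peaks.append((t, f))
    return peaks

def hashes(peaks, fan_out=3):
    """Pair each peak with a few later peaks: (f1, f2, dt), anchored at t1."""
    out = []
    for i, (t1, f1) in enumerate(peaks):
        for (t2, f2) in peaks[i + 1:i + 1 + fan_out]:
            out.append(((f1, f2, t2 - t1), t1))
    return out

# Demo: two steady sine tones yield a sparse, repeatable set of hashes.
sr = 8000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
fp = hashes(find_peaks(spectrogram(audio)))
```

Because the hashes encode frequency pairs and time deltas, simple pitch or tempo changes shift every component at once — which is exactly why naive landmark hashes break on DJ mixes, and why Panako's paper proposes hashes built from ratios that survive such scaling.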
-
Identification of all usages of OSTs in Made in Abyss (S1)
Using neural networks seems complicated; did you try audio fingerprinting? I have been using this audio fingerprinting library to power this anime song synchronization script. You can check Panako and dejavu too.
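The recognition step these fingerprinting libraries share can be sketched as an inverted index plus an offset-vote: look up each query hash, and histogram the difference between database time and query time — a tall bin means the query aligns with one track at one offset. This is a toy model of the idea, not dejavu's or Panako's actual API; the data structures and names are assumptions:

```python
# Toy fingerprint matching: inverted index + time-offset voting.
from collections import Counter, defaultdict

def build_index(tracks):
    """tracks: {track_id: [(hash, time), ...]} -> inverted index on hash."""
    index = defaultdict(list)
    for track_id, fingerprints in tracks.items():
        for h, t in fingerprints:
            index[h].append((track_id, t))
    return index

def best_match(index, query):
    """Vote on (track, db_time - query_time); a consistent offset wins."""
    votes = Counter()
    for h, t_query in query:
        for track_id, t_db in index.get(h, []):
            votes[(track_id, t_db - t_query)] += 1
    if not votes:
        return None
    (track_id, offset), score = votes.most_common(1)[0]
    return track_id, offset, score

# Toy data: track "a" contains the query's hashes shifted by 7 frames.
db = {"a": [("h%d" % i, i + 7) for i in range(20)],
      "b": [("x%d" % i, i) for i in range(20)]}
query = [("h%d" % i, i) for i in range(5)]
index = build_index(db)
print(best_match(index, query))  # -> ('a', 7, 5)
```

The offset vote is what makes this usable for synchronization tasks like the one above: the winning offset directly tells you where in the track the query clip starts.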
What are some alternatives?
music-player - Music player project for android
pyacoustid - Python bindings for Chromaprint acoustic fingerprinting and the Acoustid Web service
Sonogram-Visible-Speech - A speech and sound analysis tool.
stream-audio-fingerprint - Audio landmark fingerprinting in JavaScript
audiosource - :microphone: Use an Android device as a USB microphone
Modulo7 - A semantic and technical analysis of musical scores based on Information Retrieval Principles
processing-sound - Audio library for Processing built with JSyn
XR3Player - 🎧 🎼 The MOST ADVANCED JavaFX Media Player
Carbon - Material Design implementation for Android 4.0+. Shadows, ripples, vectors, fonts, animations, widgets, rounded corners and more.
dejavu - Audio fingerprinting and recognition in Python
sonic-sound-picture - Sonic Sound Picture (SSP) is a free, offline, and customizable music/audio visualizer software. With a range of templates to choose from, users can easily create stunning audio-visual experiences in just a few simple steps. SSP also allows users to create their own templates, giving them endless possibilities to bring their music to life.
Olaf - Olaf: Overly Lightweight Acoustic Fingerprinting is a portable acoustic fingerprinting system.