|  | free-music-demixer | khoj |
|---|---|---|
| Mentions | 7 | 50 |
| Stars | 323 | 4,858 |
| Growth | - | 4.2% |
| Activity | 8.0 | 9.9 |
| Last commit | about 1 month ago | 7 days ago |
| Language | Python | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
- Stars - the number of stars that a project has on GitHub.
- Growth - month-over-month growth in stars.
- Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
free-music-demixer
- Ask HN: What are some of the best user experiences with AI?
-
Free-music-demixer adds multi-threading to run Demucs faster in the browser
Hi HN,
Over the Christmas break I added multi-threading to the WASM Demucs module in freemusicdemixer.
Demucs (v4 hybrid transformer) is a much higher-quality model than the previous default, but it ran very slowly when limited to one worker: ~17 minutes for an average 4-minute song.
I have since implemented multi-threading with WebWorkers.
If you raise the "MAX MEMORY" setting to 16 GB or 32 GB, your track will demix within 5-7 minutes, producing state-of-the-art results.
There is also support for the Demucs 6-source model which adds piano and guitar stems.
Please reach out and report any bugs or UX issues you encounter: https://github.com/sevagh/free-music-demixer/issues
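The approach described above can be sketched in a few lines: split the waveform into fixed-length segments and demix them concurrently on a pool of workers. This is a hypothetical Python analogy to the actual C++/WASM WebWorker implementation; `SEGMENT_LEN`, `demix_segment`, and `demix_parallel` are illustrative names, not the project's code.

```python
from concurrent.futures import ThreadPoolExecutor

SEGMENT_LEN = 44100 * 10  # 10-second segments at 44.1 kHz (illustrative)

def demix_segment(samples):
    # Stand-in for per-segment model inference; the real project runs
    # a WASM build of Demucs inside each WebWorker instead.
    return [s * 0.5 for s in samples]  # pretend this is one stem

def demix_parallel(waveform, workers=4):
    # Split the track into fixed-length segments...
    segments = [waveform[i:i + SEGMENT_LEN]
                for i in range(0, len(waveform), SEGMENT_LEN)]
    # ...demix them concurrently on a pool of workers...
    with ThreadPoolExecutor(max_workers=workers) as pool:
        stems = list(pool.map(demix_segment, segments))
    # ...and stitch the per-segment outputs back together.
    return [s for seg in stems for s in seg]
```

More workers cut wall-clock time roughly linearly until memory becomes the bottleneck, which is why the post ties speed to the "MAX MEMORY" setting.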
- Show HN: Improved freemusicdemixer – AI music demixing in the browser
- FLaNK Stack Weekly for 17 July 2023
-
Show HN: Free AI-based music demixing in the browser
* Post-processing step (bigger impact)
I tried to tackle the post-processing step in my C++ code (which would gain ~1 dB in quality across all targets), but it's too tricky for now [2]. Maybe some other day.
1: https://github.com/sevagh/free-music-demixer/blob/main/examp...
2: https://github.com/sigsep/open-unmix-pytorch/blob/master/ope...
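For context on [2], open-unmix's post-processing refines the raw per-stem estimates with Wiener-filter-style soft masks so the stems stay consistent with the mixture. A heavily simplified single-channel ratio-mask sketch (NumPy; `ratio_mask_refine` is a hypothetical name, and the real code uses an iterative multichannel Wiener filter instead):

```python
import numpy as np

def ratio_mask_refine(mix_stft, stem_mags, eps=1e-8):
    """Refine stem estimates: each stem gets a soft mask proportional
    to its estimated magnitude, applied to the complex mixture STFT.
    mix_stft: complex array (freq, time); stem_mags: list of magnitude arrays."""
    total = sum(stem_mags) + eps  # denominator shared by all masks
    # Masks sum to ~1 at every bin, so refined stems sum back to the mix.
    return [(m / total) * mix_stft for m in stem_mags]
```

Because the masks are renormalized against all stems jointly, energy a model over-assigns to one target gets pulled back toward the others, which is where the reported ~1 dB gain comes from.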
khoj
-
Show HN: I made an app to use local AI as daily driver
There are already several open-source RAG chat solutions available. Two that immediately come to mind are:
Danswer
https://github.com/danswer-ai/danswer
Khoj
https://github.com/khoj-ai/khoj
-
Ask HN: How do I train a custom LLM/ChatGPT on my own documents in Dec 2023?
I'm a fan of Khoj. Been using it for months. https://github.com/khoj-ai/khoj
-
You probably don’t need to fine-tune LLMs
https://github.com/khoj-ai/khoj
This is the easiest I found, on here too.
-
Show HN: Khoj – Chat Offline with Your Second Brain Using Llama 2
Thanks for the feedback. Does your machine have a GPU? 32 GB of CPU RAM should be enough, but a GPU speeds up response time.
We have fixes for the seg fault[1] and improvement to the query speed[2] that should be released by end of day today[3].
Update khoj to version 0.10.1 with `pip install --upgrade khoj-assistant` to see if that improves your experience.
The number of documents/pages/entries doesn't scale memory utilization as quickly, and doesn't affect search or chat response time as much.
[1]: The seg fault would occur when folks sent multiple chat queries at the same time. A lock and some UX improvements fixed that.
[2]: The query time improvements come from increasing the batch size, trading increased memory utilization for more speed.
[3]: The relevant pull request for reference: https://github.com/khoj-ai/khoj/pull/393
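The batch-size trade-off in [2] is the usual throughput-vs-memory one: larger batches mean fewer expensive encoder calls but a higher peak memory footprint. A schematic sketch (`encode_fn` and `encode_in_batches` are hypothetical names, not Khoj's actual API):

```python
def encode_in_batches(texts, encode_fn, batch_size=32):
    """Encode texts in fixed-size batches: a larger batch_size means
    fewer encoder calls (faster) at the cost of higher peak memory."""
    embeddings = []
    for i in range(0, len(texts), batch_size):
        # One encoder call per batch; per-call overhead is amortized
        # across batch_size items.
        embeddings.extend(encode_fn(texts[i:i + batch_size]))
    return embeddings
```

Doubling `batch_size` roughly halves the number of encoder invocations while roughly doubling the transient memory each invocation needs.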
-
A Review: Using Llama 2 to Chat with Notes on Consumer Hardware
We recently integrated Llama 2 into Khoj. I wanted to share a short real-world evaluation of using Llama 2 for chat-with-docs use-cases, and to hear which models have worked best for you all. The standard benchmarks (ARC, HellaSwag, MMLU, etc.) are not tuned for evaluating this use-case.
- FLaNK Stack Weekly for 17 July 2023
-
An open source AI search + chat assistant for your Notion workspace
Self-host your Notion assistant using the instructions here. You'll need Python >= 3.8 to get started.
-
When will we get JARVIS?
Here's an early example: https://github.com/khoj-ai/khoj
What are some alternatives?
danswer - Gen-AI Chat for Teams - Think ChatGPT if it had access to your team's unique knowledge.
obsidian-smart-connections - Chat with your notes & see links to related content with AI embeddings. Use local models or 100+ via APIs like Claude, Gemini, ChatGPT & Llama 3
open-unmix-pytorch - Open-Unmix - Music Source Separation for PyTorch
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
heimdall - Dashboard for operating Flink jobs and deployments.
qdrant - Qdrant - High-performance, massive-scale Vector Database for the next generation of AI. Also available in the cloud https://cloud.qdrant.io/
dt - dt - duct tape for your unix pipes
llama-cpp-python - Python bindings for llama.cpp
video2dataset - Easily create large video dataset from video urls
obsidian-ava - Quickly format your notes with ChatGPT in Obsidian
plate - The rich-text editor for React.
logseq-plugin-gpt3-openai - A plugin for GPT-3 AI assisted note taking in Logseq