plaidml
NeMo
| | plaidml | NeMo |
|---|---|---|
| Mentions | 14 | 29 |
| Stars | 4,575 | 10,084 |
| Stars growth (monthly) | 0.1% | 7.1% |
| Activity | 5.4 | 9.8 |
| Latest commit | 9 months ago | 2 days ago |
| Language | C++ | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
plaidml
-
We’re Brian Retford, Jason Morton, and Ryan Cao, various researchers and developers in the ZKML (zero knowledge machine learning) space and we’ve been asked by r/privacy mods to help explain and answer questions about ZKML and why it’s important for the future of data privacy! AMA
I basically agree with all of this, but I do want to highlight that there is no 'ZKML protocol plan': the panelists here are all involved in quite different projects and are interested in ZKML for a variety of reasons. As one of the authors of https://github.com/plaidml/plaidml, I'm not expecting any kind of standard protocol to evolve for several years; the group behind this AMA, though, is optimistic about the potential of ZKML, and the AMA is part of the start of developing useful protocols.
-
Whisper – open source speech recognition by OpenAI
It understands my Swedish attempts at English really well with the medium.en model. (Although it gives me a funny warning: `UserWarning: medium.en is an English-only model but received 'English'; using English instead.` I guess it doesn't want to be told to use English when that's all it can do.)
However, it runs very slowly. It uses the CPU on my MacBook, presumably because it hasn't got an Nvidia card.
Googling about that, I found [plaidML](https://github.com/plaidml/plaidml), a project promising to run ML on many different GPU architectures. Does anyone know whether it is possible to plug them together somehow? I am not an ML researcher and don't understand the technical details of the domain, but I can understand and write Python code in domains that I do understand, so I could do some glue work if required.
-
Cloud Based training for my model?
Have you tried PlaidML https://github.com/plaidml/plaidml
-
GPU computing on Apple Silicon
This doesn't answer your question, but it would be cool if we had something based on MLIR for GPU compute. From what I've read, it closes the gap between NVIDIA and other GPU vendors a lot more than pure compute shaders. e.g. ONNX-MLIR, PlaidML, and IREE.
-
Image processing library? Also GUI development recommendations?
There is a library called PlaidML which is supposed to support Keras on a wide variety of GPUs, including the Iris. But it doesn't work for me: I hit the problem reported as Issue #168, which was first filed in 2018 and is still open. That's what I mean by not well supported.
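For context, PlaidML's documented way of hooking into Keras is to select its backend before Keras is first imported, via the `KERAS_BACKEND` environment variable (or `plaidml.keras.install_backend()`). A minimal sketch, assuming `plaidml-keras` and the standalone `keras` package are installed:

```python
import os

# The backend must be selected before the first `import keras`.
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

# import keras  # uncomment once plaidml-keras is installed;
#               # Keras ops would then run on PlaidML's OpenCL devices
print(os.environ["KERAS_BACKEND"])
```

Whether this actually works on a given GPU is exactly the question raised above; the snippet only shows the wiring, not a guarantee of support.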
- Question about the viability of AMD GPUs
- Ask HN: Will there ever be a cross platform GPU interface?
-
[P] DLPrimitives - wondering about best development direction
Not really: https://github.com/plaidml/plaidml/commits/plaidml-v1
-
Adventures in homelab AI: Putting the torch to an R710
There are reports on GitHub of plaidML conking out on older CPUs with a similar "illegal instruction" error.
- Machine learning on a new amd radeon gpu?
NeMo
-
[P] Making a TTS voice, HK-47 from Kotor using Tortoise (Ideally WaveRNN)
I haven't tested WaveRNN, but of the open-source options I know, the best is FastPitch. It's also easy to use; here is the tutorial for voice cloning.
- [N] Huggingface/nvidia release open source GPT-2B trained on 1.1T tokens
- [D] What is the best open source text to speech model?
-
[D] JAX vs PyTorch in 2023
Nowadays... bigger repos like https://github.com/NVIDIA/NeMo are all PyTorch, and lots of work published by Meta and Microsoft is all Torch too. I check new work on GitHub all the time, and I haven't seen a TensorFlow repo in years except one.
-
[D] What's stopping you from working on speech and voice?
- https://github.com/NVIDIA/NeMo
-
Can I use PyTorch to build a fast capitalization recoverer?
Can't you use the NeMo model and just strip the punctuation from the output again if you don't want it? You can also fine-tune the model with capitalization only if you look at the examples: https://github.com/NVIDIA/NeMo/blob/stable/tutorials/nlp/Punctuation_and_Capitalization.ipynb The capitalization and punctuation are annotated separately (U indicates that the word should be upper-cased, and O that it should not be capitalized). The model seems to be a token-level classifier, not seq-to-seq, so there should also be a way to get just the capitalization part, but you would have to look into the model, as it's not shown in the examples.
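As a rough illustration of the token-level idea, here is how applying only the capitalization labels (ignoring the punctuation ones) could look. The U/O label scheme mirrors the NeMo tutorial, but the function and label format are hypothetical, not NeMo's actual output objects:

```python
def apply_capitalization(words, cap_labels):
    """Apply token-level capitalization labels to a word sequence.

    'U' means upper-case the first letter of the word;
    'O' means leave the word unchanged.
    (Hypothetical format, mirroring the U/O scheme in the NeMo tutorial.)
    """
    out = []
    for word, label in zip(words, cap_labels):
        out.append(word.capitalize() if label == "U" else word)
    return " ".join(out)

# Punctuation labels are simply never applied, which recovers
# capitalization without having to strip punctuation afterwards.
print(apply_capitalization(
    ["my", "name", "is", "john", "smith"],
    ["U", "O", "O", "U", "U"],
))  # -> My name is John Smith
```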
-
I made a free transcription service powered by Whisper AI
I think there's been talk of doing speaker diarization with whisper-asr-webservice[0], which is also written in Python and should be able to make use of goodies such as pyannote-audio, py-webrtcvad, etc.
Whisper is great, but at the point where we get to kludging various things together, it starts to make more sense to use something like Nvidia NeMo[1], which was built with all of this in mind and more.
[0] - https://github.com/ahmetoner/whisper-asr-webservice
[1] - https://github.com/NVIDIA/NeMo
-
Mozilla Common Voice - Korean Language is live - Help Build a Korean Corpus for Training AI/Navi/etc
[Common Voice Email](mailto:[email protected]) || Common Voice || Korean Language Homepage || FAQs || Speaking Aloud and Reviewing Recordings || Sentence Collector || NVidia/NeMo
- Whisper – open source speech recognition by OpenAI
-
Using Edge Biometrics For Better AI Security System Development
The final security grain was added with speech-to-text anti-spoofing built on QuartzNet from the NeMo framework. This model provides a decent-quality user experience and is suitable for real-time scenarios. Measuring how close what the person says is to what the system expects requires calculating the Levenshtein distance between them.
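The Levenshtein distance mentioned above can be computed with the standard dynamic-programming recurrence. A minimal sketch (not the implementation used in the system described):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    # prev[j] holds the distance between a[:i-1] and b[:j] (previous row).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[len(b)]

print(levenshtein("open the door", "open the floor"))  # -> 2
```

A low distance between the recognized transcript and the expected passphrase indicates a match; the threshold would be tuned to tolerate recognition errors without accepting spoofed input.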
What are some alternatives?
tensorflow-opencl - OpenCL support for TensorFlow
pyannote-audio - Neural building blocks for speaker diarization: speech activity detection, speaker change detection, overlapped speech detection, speaker embedding
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
DeepSpeech - DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
pytorch-coriander - OpenCL build of pytorch - (in-progress, not useable)
espnet - End-to-End Speech Processing Toolkit
onnx-mlir - Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
dlprimitives - Deep Learning Primitives and Mini-Framework for OpenCL
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production