PaLM-flax vs whisper-timestamped
| | PaLM-flax | whisper-timestamped |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 14 | 2,178 |
| Growth | - | 4.5% |
| Activity | 4.2 | 6.7 |
| Last commit | over 2 years ago | about 1 month ago |
| Language | Python | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
PaLM-flax
-
[R] Proprietary ML model in research paper
Google, DeepMind, and OpenAI normally provide a section in their research papers for replicating the pre-training and fine-tuning architectures of their models. For example, there is a replication of the pre-training architecture outlined in the LaMDA research paper in PyTorch (https://github.com/conceptofmind/LaMDA-pytorch/blob/main/lamda_pytorch/lamda_pytorch.py), and another implementation of Google's SOTA Pathways Language Model (PaLM) in JAX/Flax (https://github.com/conceptofmind/PaLM-flax).
whisper-timestamped
-
Show HN: AI Dub Tool I Made to Watch Foreign Language Videos with My 7-Year-Old
Yes. But Whisper's word-level timings are actually quite inaccurate out of the box. There are some Python libraries that mitigate that. I tested several of them. whisper-timestamped seems to be the best one. [0]
[0] https://github.com/linto-ai/whisper-timestamped
-
AI-assisted removal of filler words from video recordings
whisper-timestamped is a layer on top of the Whisper set of models that enables us to get accurate word timestamps and to include filler words in the transcription output. This transcriber downloads the selected Whisper model to the machine running the demo, and no third-party API keys are required.
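The filler-word workflow described above can be sketched against the word-level output shape that whisper-timestamped documents (a `segments` list whose entries carry `words` with `text`, `start`, `end`, and `confidence`). This is a minimal illustration only: the `FILLERS` set and the sample `result` dict are assumptions for the example, not output from a real transcription run.

```python
# Hypothetical filler-word list for illustration.
FILLERS = {"um", "uh", "erm", "hmm"}

def filler_spans(result, fillers=FILLERS):
    """Return (start, end) time spans of filler words to cut from a recording,
    given a result dict in whisper-timestamped's documented output shape."""
    spans = []
    for segment in result["segments"]:
        for word in segment["words"]:
            # Normalize the token: drop surrounding punctuation, lowercase.
            token = word["text"].strip(".,!?").lower()
            if token in fillers:
                spans.append((word["start"], word["end"]))
    return spans

# Sample data mimicking whisper-timestamped's word-level output.
result = {
    "segments": [
        {
            "words": [
                {"text": "Um,", "start": 0.1, "end": 0.4, "confidence": 0.90},
                {"text": "hello", "start": 0.5, "end": 0.9, "confidence": 0.99},
                {"text": "uh", "start": 1.0, "end": 1.2, "confidence": 0.80},
                {"text": "world.", "start": 1.3, "end": 1.8, "confidence": 0.98},
            ]
        }
    ]
}

print(filler_spans(result))  # → [(0.1, 0.4), (1.0, 1.2)]
```

A video editor would then cut or mute these spans; the transcription step itself would come from running the actual whisper-timestamped model on the audio.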
What are some alternatives?
PaLM-pytorch - Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways
pyannote-whisper
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
wav2vec - pure numpy implementation of wav2vec 2.0
x-transformers - A concise but complete full-attention transformer with a set of promising experimental features from various papers
whisper-auto-transcribe - Auto transcribe tool based on whisper
RWKV-LM - RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". So it's combining the best of RNN and transformer - great performance, linear time, constant space (no kv-cache), fast training, infinite ctx_len, and free sentence embedding.
pywhisper - openai/whisper + extra features
soundstorm-pytorch - Implementation of SoundStorm, Efficient Parallel Audio Generation from Google Deepmind, in Pytorch
SincNet - SincNet is a neural architecture for efficiently processing raw audio samples.
transformer-smaller-training-vocab - Temporarily removes unused tokens during training to save RAM and speed up training.
SpeechBird - Speech Bird is a speech recognition system that makes complete hands-free computer control truly feasible, fast, and accurate. Open source. Based on Windows Speech Recognition (WSR) and WSR Macros.