iOS-Runtime-Headers vs whisper.cpp

| | iOS-Runtime-Headers | whisper.cpp |
|---|---|---|
| Mentions | 2 | 187 |
| Stars | 7,923 | 31,649 |
| Growth | - | - |
| Activity | 10.0 | 9.8 |
| Latest commit | almost 2 years ago | 4 days ago |
| Language | Objective-C | C |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
iOS-Runtime-Headers
-
Android Devices with Backdoored Firmware Found in US Schools
Sure, but private methods are another vector - tracking and bypassing the IDFA and potentially acting as official Apple Apps to use/abuse things like Carrier/SIM info[0], updating the wallpaper for the user[1], accessing call history[2], etc.
0: https://github.com/nst/iOS-Runtime-Headers/blob/fbb634c78269...
1: https://github.com/nst/iOS-Runtime-Headers/issues/32
2: https://github.com/nst/iOS-Runtime-Headers/tree/fbb634c78269...
-
Everything we know about the Apple Neural Engine (ANE)
My question too. This semi-answer on the page seems to contradict itself (source: https://github.com/hollance/neural-engine/blob/master/docs/p... ):
"> Can I program the ANE directly?
Unfortunately not. You can only use the Neural Engine through Core ML at the moment.
There currently is no public framework for programming the ANE. There are several private, undocumented frameworks but obviously we cannot use them as Apple rejects apps that use private frameworks.
(Perhaps in the future Apple will provide a public version of AppleNeuralEngine.framework.)"
The last part links to this bunch of headers:
https://github.com/nst/iOS-Runtime-Headers/tree/master/Priva...
So might it be more accurate to say you can program it directly, but won't end up with something that can be distributed on the app store?
whisper.cpp
-
Show HN: I created automatic subtitling app to boost short videos
whisper.cpp [1] has a karaoke example that uses ffmpeg's drawtext filter to display rudimentary karaoke-like captions. It also supports diarisation. Perhaps it could be a starting point to create a better script that does what you need.
--
1: https://github.com/ggerganov/whisper.cpp/blob/master/README....
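The per-segment timestamps that a caption script like that needs are exposed by whisper.cpp's C API. Below is a minimal, hedged sketch that prints each segment with its start/end time; the model path is an assumption, the PCM is expected to be 16 kHz mono float, and the init function name has changed across whisper.cpp versions.

```c
// Hedged sketch: print each transcribed segment with its start/end time,
// which is the information a drawtext-style caption script would consume.
#include <stdio.h>
#include "whisper.h"

int transcribe(const float *pcm, int n_samples) {
    // Older releases expose whisper_init_from_file(); newer ones use
    // whisper_init_from_file_with_params() -- adjust to your version.
    struct whisper_context *ctx = whisper_init_from_file("models/ggml-base.en.bin");
    if (!ctx) return 1;

    struct whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);

    if (whisper_full(ctx, params, pcm, n_samples) != 0) {
        whisper_free(ctx);
        return 1;
    }

    const int n = whisper_full_n_segments(ctx);
    for (int i = 0; i < n; i++) {
        // t0/t1 are in units of 10 ms
        int64_t t0 = whisper_full_get_segment_t0(ctx, i);
        int64_t t1 = whisper_full_get_segment_t1(ctx, i);
        printf("[%lld -> %lld] %s\n", (long long)t0, (long long)t1,
               whisper_full_get_segment_text(ctx, i));
    }

    whisper_free(ctx);
    return 0;
}
```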
- LLaMA Now Goes Faster on CPUs
-
LLMs on your local Computer (Part 1)
The ggml library is one of the first libraries for local LLM inference. It's a pure C library that converts models to run on several devices, including desktops, laptops, and even mobile devices - and therefore it can also be considered a tinkering tool for trying new optimizations that are then incorporated into other downstream projects. This tool is at the heart of several other projects, powering LLM inference on desktops and even mobile phones. Subprojects for running specific models or model families exist, such as whisper.cpp.
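To make the "pure C library" point concrete, here is a minimal, hedged sketch of ggml's core idea: tensors are allocated inside a preallocated context, operations build a computation graph, and the graph is then computed. The buffer size and function names are assumptions from the ggml C API and have changed across versions (e.g. some releases use ggml_build_forward() instead of ggml_new_graph()).

```c
// Hedged sketch of ggml's tensor/graph model (API details vary by version).
#include <stdio.h>
#include "ggml.h"

int main(void) {
    // All tensors and graph metadata live in one preallocated arena.
    struct ggml_init_params ip = {
        .mem_size   = 16 * 1024 * 1024,
        .mem_buffer = NULL,
        .no_alloc   = false,
    };
    struct ggml_context *ctx = ggml_init(ip);

    // y = a * x + b, elementwise over small 1-D tensors
    struct ggml_tensor *x = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor *a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor *b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor *y = ggml_add(ctx, ggml_mul(ctx, a, x), b);

    // Describe the graph, fill in the inputs, then compute on the CPU.
    struct ggml_cgraph *gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, y);

    for (int i = 0; i < 4; i++) {
        ggml_set_f32_1d(x, i, (float) i);
        ggml_set_f32_1d(a, i, 2.0f);
        ggml_set_f32_1d(b, i, 1.0f);
    }

    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/1);

    for (int i = 0; i < 4; i++) {
        printf("y[%d] = %f\n", i, ggml_get_f32_1d(y, i));
    }

    ggml_free(ctx);
    return 0;
}
```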
-
Voxos.ai – An Open-Source Desktop Voice Assistant
I'm not sure if it is _fully_ openai compatible, but whispercpp has a server bundled that says it is "OAI-like": https://github.com/ggerganov/whisper.cpp/tree/master/example...
I don't have any direct experience with it... I've only played around with whisper locally, using scripts.
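For reference, talking to that bundled server from C is essentially a multipart POST. A hedged sketch with libcurl follows; the port, the /inference endpoint, and the form field names are assumptions taken from the server example's documented defaults and may differ by version.

```c
// Hedged sketch: upload a WAV file to a locally running whisper.cpp server.
// Assumes the server example's defaults (port 8080, /inference endpoint,
// a multipart "file" field); check the example's README for your version.
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    curl_mime *form = curl_mime_init(curl);

    // Attach the audio file as the "file" form field.
    curl_mimepart *part = curl_mime_addpart(form);
    curl_mime_name(part, "file");
    curl_mime_filedata(part, "audio.wav");

    // Ask for a JSON response (field name assumed from the example docs).
    part = curl_mime_addpart(form);
    curl_mime_name(part, "response_format");
    curl_mime_data(part, "json", CURL_ZERO_TERMINATED);

    curl_easy_setopt(curl, CURLOPT_URL, "http://127.0.0.1:8080/inference");
    curl_easy_setopt(curl, CURLOPT_MIMEPOST, form);

    CURLcode res = curl_easy_perform(curl);  // response body goes to stdout by default
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_mime_free(form);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```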
-
Jarvis: A Voice Virtual Assistant in Python (OpenAI, ElevenLabs, Deepgram)
Unless I'm misunderstanding, `whisper.cpp` seems to support streaming, and the repository includes a native example[0] and a WASM example[1] with a demo site[2].
[0]: https://github.com/ggerganov/whisper.cpp/tree/master/example...
- Wchess
-
I've open sourced my Flutter plugin to run on-device LLMs on any platform. TestFlight builds available now.
Usage 1: Good to transcribe audio. An example use case could be to summarize YouTube videos or long courses. Usage 2: You talk with voice to your AI that responds with text (later with audio too). - https://github.com/ggerganov/whisper.cpp
-
Scrybble is the ReMarkable highlights to Obsidian exporter I have been looking for
whisper.cpp (offline speech-to-text transcription, models trained by OpenAI, CLI based, browser based)
- Whisper.wasm
-
Whisper C++ not working for me. Anyone else?
Has anyone played around with Whisper C++ for Swift? I'm hitting a snag even on the demo. I've downloaded the GitHub repo and everything matches up with this video [ https://youtu.be/b10OHCDHDQ4 ], but when he hits the transcribe button, it actually prints out the captioning. When I do it, it skips that part and just says "Done...". It does everything else - plays the audio, says it's transcribing - it just doesn't show me the transcription, and it's not in the debug window either. The demo isn't throwing any errors, and I haven't really messed with the code, so this is their example. https://github.com/ggerganov/whisper.cpp
What are some alternatives?
neural-engine - Everything we actually know about the Apple Neural Engine (ANE)
faster-whisper - Faster Whisper transcription with CTranslate2
ane - Reverse engineered Linux driver for the Apple Neural Engine (ANE).
bark - Text-Prompted Generative Audio Model
m1n1 - A bootloader and experimentation playground for Apple Silicon
Whisper - High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
ml-ane-transformers - Reference implementation of the Transformer architecture optimized for Apple Neural Engine (ANE)
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]
whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
llama.cpp - LLM inference in C/C++
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)