iOS-Runtime-Headers vs neural-engine

| | iOS-Runtime-Headers | neural-engine |
|---|---|---|
| Mentions | 2 | 22 |
| Stars | 7,923 | 1,884 |
| Growth | - | - |
| Activity | 10.0 | 5.1 |
| Latest commit | almost 2 years ago | about 2 months ago |
| Language | Objective-C | - |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
iOS-Runtime-Headers
-
Android Devices with Backdoored Firmware Found in US Schools
Sure, but private methods are another vector: tracking and bypassing the IDFA, and potentially acting as official Apple apps to use/abuse things like carrier/SIM info[0], updating the wallpaper for the user[1], accessing call history[2], etc.
0: https://github.com/nst/iOS-Runtime-Headers/blob/fbb634c78269...
1: https://github.com/nst/iOS-Runtime-Headers/issues/32
2: https://github.com/nst/iOS-Runtime-Headers/tree/fbb634c78269...
-
Everything we know about the Apple Neural Engine (ANE)
My question too. This semi-answer on the page seems to contradict itself (source: https://github.com/hollance/neural-engine/blob/master/docs/p... ):
"> Can I program the ANE directly?
Unfortunately not. You can only use the Neural Engine through Core ML at the moment.
There currently is no public framework for programming the ANE. There are several private, undocumented frameworks but obviously we cannot use them as Apple rejects apps that use private frameworks.
(Perhaps in the future Apple will provide a public version of AppleNeuralEngine.framework.)"
The last part links to this bunch of headers:
https://github.com/nst/iOS-Runtime-Headers/tree/master/Priva...
So might it be more accurate to say you can program it directly, but you won't end up with something that can be distributed on the App Store?
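For context on "you can only use the Neural Engine through Core ML": the public knob is a compute-unit preference at model-load time, and Core ML still decides layer by layer what actually lands on the ANE. A minimal sketch using coremltools (the `ComputeUnit.CPU_AND_NE` enum is the real coremltools API; the import guard and model filename are only there so the snippet stands alone):

```python
# Hedged sketch: the public route to the ANE is a Core ML compute-unit
# preference, not direct programming. The import guard only keeps the
# snippet runnable where coremltools is not installed.
try:
    import coremltools as ct
    _UNIT = ct.ComputeUnit.CPU_AND_NE   # real coremltools enum value
except ImportError:
    _UNIT = "CPU_AND_NE"                # symbolic stand-in for illustration

def ane_load_options():
    """Options asking Core ML to schedule work on CPU + Neural Engine.
    Whether a given layer actually runs on the ANE is still up to Core ML."""
    return {"compute_units": _UNIT}

# Usage (hypothetical model file):
#   model = ct.models.MLModel("MyModel.mlpackage", **ane_load_options())
print(ane_load_options()["compute_units"])
```

Note there is no corresponding option that forces ANE-only execution; Core ML silently falls back to GPU/CPU for unsupported ops, which is part of why the framework feels like a black box.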
neural-engine
-
Apple Introduces M4 Chip
~38 TOPS at fp16 is amazing, if the quoted number is in fact fp16. (The ANE is fp16 according to [1], but that honestly seems like a bad choice when even high-end datacenter cards are moving to smaller formats, so I'm not sure why Apple would use it natively instead of fp8.)
[1]: https://github.com/hollance/neural-engine/blob/master/docs/1...
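As a hedged back-of-envelope on what a headline TOPS figure implies (Apple does not publish the ANE's MAC count or clock, so the numbers below are illustrative guesses, not specs), peak throughput decomposes as 2 ops per multiply-accumulate × MACs × clock:

```python
# Back-of-envelope decomposition of a "38 TOPS" headline number.
# MAC count and clock here are hypothetical, chosen only to land near 38.
def tops(macs_per_cycle: int, clock_hz: float) -> float:
    """Peak trillions of ops/sec, counting each multiply-accumulate as 2 ops."""
    return 2 * macs_per_cycle * clock_hz / 1e12

# e.g. 16 ANE cores x 1,024 MACs per core at ~1.16 GHz:
print(round(tops(16 * 1024, 1.16e9), 1))  # ~38.0
```

The fp8 point follows directly: if the same MAC array could process two fp8 operands per fp16 lane, the identical silicon would advertise roughly double the TOPS, which is why the quoted precision matters when comparing chips.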
-
Optimize sgemm on RISC-V platform
Yep. They have a neural engine, separate from the CPU and GPU, that does really fast matmuls: https://github.com/hollance/neural-engine. It's basically completely undocumented.
-
Apple is adding more and more neural engine cores to their products, is there any way to use them for local LLMs?
Looks like the ANE ("Apple Neural Engine") cores are powerful but not as flexible/programmable as the GPU cores. There is no sign that LLM inference is possible with them or ever will be unless Apple either opens up the closed ANE software framework for extensibility or they extend the ANE framework to support modern LLMs themselves. I would not hold my breath.
-
Anthropic’s $5B, 4-year plan to take on OpenAI
If Apple would wake up to what's happening with llama.cpp etc., then I don't see such a big role for paying for remote access to big models via API.
Currently, a MacBook has a Neural Engine that sits idle 99% of the time and is only suitable for running limited models (poorly documented, opaque rules about which ops can be accelerated, a black-box compiler [1], and an apparent 3GB model size limit [2]).
OTOH you can buy a MacBook with 64GB 'unified' memory and a Neural Engine today.
If you squint a bit and look into the near future, it's not so hard to imagine a future Mx chip with a more capable Neural Engine and yet more RAM, able to run the largest GPT-3 class models locally. (Ideally with better developer tools so other compilers can target the NE.)
And then imagine it does that while leaving the CPU+GPU mostly free to run apps/games... the whole experience of using a computer could change radically in that case.
I find it hard not to think this is coming within 5 years (although, equally, I can imagine this is not on Apple's roadmap at all currently).
[1] https://github.com/hollance/neural-engine
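The "largest GPT-3 class models locally" claim can be sanity-checked with weight-storage arithmetic alone (a hedged sketch: it ignores KV cache and activations, and uses the publicly stated 175B parameter count for GPT-3):

```python
# Hedged sketch: does a GPT-3-class model fit in 64 GB of unified memory?
# Weight storage only; ignores KV cache, activations, and runtime overhead.
def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1e9  # decimal GB

print(weights_gb(175, 2))    # 175B at fp16  -> 350.0 GB: far over 64 GB
print(weights_gb(175, 0.5))  # 175B at 4-bit -> 87.5 GB: still over
print(weights_gb(70, 0.5))   # 70B at 4-bit  -> 35.0 GB: fits in 64 GB

# The claimed ~3 GB ANE model limit would cap an fp16 model near 1.5B params:
print(3 / 2e-9 / 1e9)  # -> 1.5 (billions of parameters)
```

So even with 64 GB of RAM, a true 175B-class model needs aggressive quantization or a bigger machine, and the claimed ANE size limit is the binding constraint long before RAM is.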
- Everything we actually know about the Apple Neural Engine (ANE)
- What we know about the Apple Neural Engine
What are some alternatives?
ane - Reverse engineered Linux driver for the Apple Neural Engine (ANE).
Dual-Edge-TPU-Adapter - Dual Edge TPU Adapter to use it on a system with single PCIe port on m.2 A/B/E/M slot
m1n1 - A bootloader and experimentation playground for Apple Silicon
pyllms - Minimal Python library to connect to LLMs (OpenAI, Anthropic, AI21, Cohere, Aleph Alpha, HuggingfaceHub, Google PaLM2), with a built-in model performance benchmark.
ml-ane-transformers - Reference implementation of the Transformer architecture optimized for Apple Neural Engine (ANE)
ANECompat - A tool which checks compatibility of CoreML model with Apple Neural Engine
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]
pytorch-apple-silicon-benchmarks - Performance of PyTorch on Apple Silicon
whisper.cpp - Port of OpenAI's Whisper model in C/C++
tensorexperiments - Boilerplate for GPU-Accelerated TensorFlow and PyTorch code on M1 Macbook
more-ane-transformers - Run transformers (incl. LLMs) on the Apple Neural Engine.
cnn-benchmarks - Benchmarks for popular CNN models