neural-engine
more-ane-transformers
| | neural-engine | more-ane-transformers |
|---|---|---|
| Mentions | 22 | 4 |
| Stars | 1,884 | 35 |
| Growth | - | - |
| Activity | 5.1 | 7.0 |
| Latest commit | about 2 months ago | 6 months ago |
| Language | Python | - |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
neural-engine
-
Apple Introduces M4 Chip
~38 TOPS at fp16 is amazing, if the quoted number is fp16 (the ANE is fp16 according to this [1], but that honestly seems like a bad choice when people are moving to smaller formats even on the high-end datacenter cards, so I'm not sure why Apple would use it instead of fp8 natively)
[1]: https://github.com/hollance/neural-engine/blob/master/docs/1...
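The fp16-vs-fp8 tradeoff the comment raises is about range and precision per value. Python's stdlib can illustrate what fp16 gives up, since the `struct` module supports IEEE 754 half precision via the `'e'` format code (illustrative only, nothing ANE-specific):

```python
import struct

def fp16_roundtrip(x: float) -> float:
    """Round-trip a float through IEEE 754 half precision (binary16)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# fp16's largest finite value is 65504
print(fp16_roundtrip(65504.0))   # 65504.0

# With a 10-bit mantissa, integers are only exact up to 2**11 = 2048;
# 2049 rounds to the nearest representable value, which is 2048
print(fp16_roundtrip(2049.0))    # 2048.0
```

fp8 formats (e.g. E4M3) give up even more precision and range per value, but halve memory and bandwidth again relative to fp16, which is the tradeoff being alluded to.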
-
Optimize sgemm on RISC-V platform
Yep. They have a Neural Engine that is separate from the CPU and GPU and does really fast matmuls: https://github.com/hollance/neural-engine. It's basically completely undocumented.
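For a rough sense of why dedicated matmul hardware matters: a dense matmul costs about 2·m·k·n floating-point operations, so transformer-sized projections rack up tens of billions of FLOPs per layer. A back-of-envelope sketch (the shapes are illustrative, not ANE measurements):

```python
def matmul_flops(m: int, k: int, n: int) -> int:
    # Each of the m*n output elements needs k multiplies and k adds,
    # so roughly 2*m*k*n floating-point operations in total.
    return 2 * m * k * n

# One 4096x4096 projection applied to a batch of 512 tokens:
flops = matmul_flops(512, 4096, 4096)
print(f"{flops:.3e} FLOPs")  # ~1.7e10 for a single projection
```

At that scale, an accelerator that only does matmuls, but does them at tens of TOPS, pays for its die area quickly.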
-
Apple is adding more and more neural engine cores to their products, is there any way to use them for local LLMs?
Looks like the ANE ("Apple Neural Engine") cores are powerful but not as flexible or programmable as the GPU cores. There is no sign that LLM inference is possible with them, or ever will be, unless Apple either opens up the closed ANE software framework for extensibility or extends it to support modern LLMs themselves. I would not hold my breath.
-
Anthropic’s $5B, 4-year plan to take on OpenAI
If Apple would wake up to what's happening with llama.cpp etc., then I don't see such a big role for paying for remote access to big models via API.
Currently a Macbook has a Neural Engine that is sitting idle 99% of the time and is only suitable for running limited models (poorly documented, opaque rules about what ops can be accelerated, a black-box compiler [1], and an apparent 3GB model size limit [2]).
OTOH you can buy a Macbook with 64GB 'unified' memory and a Neural Engine today
If you squint a bit and look into the near future it's not so hard to imagine a future Mx chip with a more capable Neural Engine and yet more RAM, and able to run the largest GPT3 class models locally. (Ideally with better developer tools so other compilers can target the NE)
And then imagine it does that while leaving the CPU+GPU mostly free to run apps/games ... the whole experience of using a computer could change radically in that case.
I find it hard not to think this is coming within 5 years (although equally, I can imagine this is not on Apple's roadmap at all currently)
[1] https://github.com/hollance/neural-engine
- Everything we actually know about the Apple Neural Engine (ANE)
- What we know about the Apple Neural Engine
-
Everything we know about the Apple Neural Engine (ANE)
My question too. This semi-answer on the page seems to contradict itself (source: https://github.com/hollance/neural-engine/blob/master/docs/p... ):
"> Can I program the ANE directly?
Unfortunately not. You can only use the Neural Engine through Core ML at the moment.
There currently is no public framework for programming the ANE. There are several private, undocumented frameworks but obviously we cannot use them as Apple rejects apps that use private frameworks.
(Perhaps in the future Apple will provide a public version of AppleNeuralEngine.framework.)"
The last part links to this bunch of headers:
https://github.com/nst/iOS-Runtime-Headers/tree/master/Priva...
So might it be more accurate to say you can program it directly, but you won't end up with something that can be distributed on the App Store?
more-ane-transformers
- M2 Ultra can run 128 streams of Llama 2 7B in parallel
- Is it possible to use the ANE (Apple Neural Engine) to run those models?
-
The Coming of Local LLMs
Apple should get working on a version of the Neural Engine that is useful for these models, and remove the 3GB size limit [1] to take full advantage of the 'unified' memory architecture. Game changer.
Waste of die space currently
[1] https://github.com/smpanaro/more-ane-transformers/blob/main/...
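To see why the reported ~3GB limit is so restrictive for LLMs, a quick back-of-envelope on weight storage alone (assuming fp16, i.e. 2 bytes per parameter; the figures are illustrative):

```python
GB = 1024 ** 3

def fp16_size_gb(n_params: float) -> float:
    """Approximate fp16 weight size in GiB: 2 bytes per parameter."""
    return n_params * 2 / GB

for name, params in [
    ("GPT-2 XL (1.5B)", 1.5e9),
    ("Llama 2 7B", 7e9),
    ("GPT-3 (175B)", 175e9),
]:
    size = fp16_size_gb(params)
    verdict = "fits" if size <= 3 else "exceeds"
    print(f"{name}: {size:.1f} GiB -> {verdict} a 3 GB limit")
```

Only models around ~1.5B parameters squeeze under the limit at fp16; anything Llama-sized needs aggressive quantization, a lifted limit, or both.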
- Anthropic’s $5B, 4-year plan to take on OpenAI
What are some alternatives?
Dual-Edge-TPU-Adapter - Dual Edge TPU Adapter to use it on a system with single PCIe port on m.2 A/B/E/M slot
pyllms - Minimal Python library to connect to LLMs (OpenAI, Anthropic, AI21, Cohere, Aleph Alpha, HuggingfaceHub, Google PaLM2), with a built-in model performance benchmark.
whisper.coreml - Robust Speech Recognition via Large-Scale Weak Supervision
ANECompat - A tool which checks compatibility of CoreML model with Apple Neural Engine
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]
pytorch-apple-silicon-benchmarks - Performance of PyTorch on Apple Silicon
duckduckgo-locales - Translation files for https://duckduckgo.com
tensorexperiments - Boilerplate for GPU-Accelerated TensorFlow and PyTorch code on M1 Macbook
experiments-coreml-ane-distilbert - Experimenting with https://github.com/apple/ml-ane-transformers
cnn-benchmarks - Benchmarks for popular CNN models
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.