| | iNeural | armnn |
|---|---|---|
| Mentions | 4 | 2 |
| Stars | 5 | 1,120 |
| Growth | - | 1.6% |
| Activity | 0.0 | 9.2 |
| Latest Commit | over 1 year ago | 2 days ago |
| Language | C++ | C++ |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
iNeural
-
[Hobby] I Need a Friendly Team. Your Experience Doesn't Matter!
iNeural (Creating Artificial Neural Networks) : https://github.com/fkkarakurt/iNeural
- iNeural
-
iNeural : Update (8.12.21)
git clone https://github.com/fkkarakurt/iNeural.git
-
iNeural : A library for creating Artificial Neural Networks
I have completed the first stage of this artificial neural network library, which relies heavily on the Eigen 3 library. I am continuing to develop the project; you can check it out on my GitHub account.
armnn
-
LeCun: Qualcomm working with Meta to run Llama-2 on mobile devices
Like ARM? https://github.com/ARM-software/armnn
Optimization for this workload has arguably been in progress for decades. Modern AVX instructions can be found in laptops that are a decade old now, and most big inferencing projects are built around SIMD or GPU shaders. Unless your computer ships with onboard Nvidia hardware, there's usually not much difference in inferencing performance.
-
Apple previews Live Speech, Personal Voice, and more new accessibility features
Yes, and generic multicore ARM CPUs can run ARM's standard compute library regardless of their hardware: https://github.com/ARM-software/armnn
Plus, the benchmark you've linked to compares CPU-accelerated code against the notoriously crippled MKL execution path. A more appropriate comparison would pit Apple's AMX units against a Ryzen's SIMD-optimized inferencing.
What are some alternatives?
fann - Official github repository for Fast Artificial Neural Network Library (FANN)
gemm-benchmark - Simple [sd]gemm benchmark, similar to ACES dgemm
GPBoost - Combining tree-boosting with Gaussian process and mixed effects models
llama - Inference code for Llama models
command - Command, a ::process::Command-style process-spawning API in C++.
piper - A fast, local neural text to speech system
Seayon - Open source Neural Network library in C++
DeepSpeech - DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.
Nerve - This is a basic implementation of a neural network for use in C and C++ programs. It is intended for use in applications that just happen to need a simple neural network and do not want to use needlessly complex neural network libraries.
llama.cpp - LLM inference in C/C++
frugally-deep - Header-only library for using Keras (TensorFlow) models in C++.
serge - A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy to use API.