ncnn
Numba
| | ncnn | Numba |
|---|---|---|
| Mentions | 12 | 124 |
| Stars | 18,997 | 9,350 |
| Growth | 1.6% | 1.7% |
| Activity | 9.4 | 9.9 |
| Latest commit | 9 days ago | 6 days ago |
| Language | C++ | Python |
| License | GNU General Public License v3.0 or later | BSD 3-clause "New" or "Revised" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ncnn
-
AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
ncnn uses Vulkan for GPU acceleration, I've seen it used in a few projects to get AMD hardware support.
-
[D] Best way to package Pytorch models as a standalone application
They're using NCNN to package the model. Have a look. https://github.com/Tencent/NCNN
-
Realtime object detection android app
Hi. Here is my preferred Android app for realtime object detection: https://github.com/nihui/ncnn-android-nanodet ; https://github.com/Tencent/ncnn contains many Android demo apps for a lot of models.
-
Esp32 tensorflow lite
ncnn home page: https://github.com/Tencent/ncnn
-
MMDeploy: Deploy All the Algorithms of OpenMMLab
ncnn
-
Draw Things, Stable Diffusion in your pocket, 100% offline and free
Yes, Android devices tend to have more RAM, making 1024x1024 generation possible (this is not possible at all on iPhones; my current implementation can peak around 5 GiB of memory, and some serious engineering would be required to bring that down on iPhone devices). The problem is that I am not sure about speed. I would likely switch to NCNN (https://github.com/Tencent/ncnn) as the backend, which has decent Vulkan compute kernel support. It is definitely a possibility, and there is a path to do that.
-
What’s New in TensorFlow 2.10?
-
[Technical Article] OCR Upgrade
As a leading open-source inference framework in China and worldwide, what we like about it is its almost zero-cost cross-platform capability, high inference speed, and minimal deployment footprint. (Project address: https://github.com/Tencent/ncnn)
-
Is there a functioning neural network or backbone written in pure C language only?
If you’re not planning on training the neural net on an embedded device and just do inference, this might interest you: https://github.com/Tencent/ncnn
-
Deep Learning options on Radeon RX 6800
There's a Tencent-developed Open Source CNN library that runs on pretty much anything, as it's using Vulkan. It's called ncnn, you might want to take a look.
Numba
-
Mojo🔥: Head-to-Head with Python and Numba
Around the same time, I discovered Numba and was fascinated by how easily it could bring huge performance improvements to Python code.
-
Is anyone using PyPy for real work?
Simulations are, at least in my experience, numba’s [0] wheelhouse.
-
Python Algotrading with Machine Learning
A super-fast backtesting engine built in NumPy and accelerated with Numba.
-
PYTHON vs OCTAVE for Matlab alternative
Regarding speed, I don't agree this is a good argument against Python. For example, it seems no one here has yet mentioned numba, a Python JIT compiler. With a simple decorator you can compile a function to machine code with speeds on par with C. Numba also allows you to easily write CUDA kernels for GPU computation. Thanks to numba, I've never had to drop down to writing C or C++ to get fast, performant Python code for computationally demanding tasks.
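As a minimal sketch of the decorator workflow described above (assuming `numba` is installed; the fallback decorator is only there so the snippet still runs without it):

```python
try:
    from numba import njit
except ImportError:
    # Fallback so the sketch runs even without numba installed;
    # with numba present, the decorated loop is compiled to machine code.
    def njit(func):
        return func

@njit
def sum_of_squares(n):
    # A plain Python loop like this is slow in CPython,
    # but reaches C-like speed once numba JIT-compiles it.
    total = 0.0
    for i in range(n):
        total += i * i
    return total

print(sum_of_squares(1_000))  # 332833500.0
```

The first call triggers compilation, so it carries a one-time overhead; subsequent calls run at compiled speed.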
-
Codon: Python Compiler
Just for reference,
* Nuitka[0] "is a Python compiler written in Python. It's fully compatible with Python 2.6, 2.7, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 3.10, and 3.11."
* Pypy[1] "is a replacement for CPython" with built-in optimizations such as on-the-fly JIT compilation.
* Cython[2] "is an optimising static compiler for both the Python programming language and the extended Cython programming language... makes writing C extensions for Python as easy as Python itself."
* Numba[3] "is an open source JIT compiler that translates a subset of Python and NumPy code into fast machine code."
* Pyston[4] "is a performance-optimizing JIT for Python, and is drop-in compatible with ... CPython 3.8.12"
-
Two-tier programming language
Taichi (similar to numba) is a Python library that allows you to write high-speed code within Python. So your program consists of slow Python that gets interpreted regularly, and fast Python (fully type-annotated and restricted to a subset of the language) that gets parallelized and JIT-compiled for CPU or GPU. And you can mix the two within the same source file.
-
Been using Python for 3 years, never used a Class.
There are also just-in-time compilers available for some Python features, that compile those parts to machine code. That includes Numba (usable as a library within CPython) and Pypy (an alternative Python implementation that includes a JIT compiler to improve performance). There’s also Cython, which is a superset of Python that allows more directly interfacing with C and C++ functions, and compiling the resulting combined code.
-
Is there a language with lisp syntax but C semantics?
This was a submission from u/bpecsek and shows that Lisp with SBCL can do quite well on benchmarking. But keep in mind that these sorts of benchmarks can't tell you much about real-world applications. Moreover, if you are really concerned about niche performance, you need to start thinking about compilers. Heck, with an appropriate compiler even Python can go wroom.
-
[D] Yann LeCun's Hot Take about programming languages for ML
-
Python Developer Seeking Input: Is it Worth Learning Rust for FFI?
- If no purpose-built libraries are faster, use numba (http://numba.pydata.org/) to speed up your code. Optionally, you can also use Taichi (https://www.taichi-lang.org/) instead of numba.
What are some alternatives?
NetworkX - Network Analysis in Python
jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
Dask - Parallel computing with task scheduling
cupy - NumPy & SciPy for GPU
Pyjion - Pyjion - A JIT for Python based upon CoreCLR
SymPy - A computer algebra system written in pure Python
statsmodels - Statsmodels: statistical modeling and econometrics in Python
XNNPACK - High-efficiency floating-point neural network inference operators for mobile, server, and Web
rife-ncnn-vulkan - RIFE, Real-Time Intermediate Flow Estimation for Video Frame Interpolation implemented with ncnn library
cudf - cuDF - GPU DataFrame Library
julia - The Julia Programming Language
PyMC - Bayesian Modeling and Probabilistic Programming in Python