ncnn vs HIPIFY
| | ncnn | HIPIFY |
|---|---|---|
| Mentions | 12 | 11 |
| Stars | 19,234 | 318 |
| Growth | 2.1% | - |
| Activity | 9.4 | 0.0 |
| Latest commit | 4 days ago | 5 months ago |
| Language | C++ | C++ |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ncnn
- AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
ncnn uses Vulkan for GPU acceleration; I've seen it used in a few projects to get AMD hardware support.
https://github.com/Tencent/ncnn
- [D] Best way to package Pytorch models as a standalone application
They're using NCNN to package the model. Have a look. https://github.com/Tencent/NCNN
- Realtime object detection android app
Hi. Here is my preferred Android app for realtime object detection: https://github.com/nihui/ncnn-android-nanodet ; https://github.com/Tencent/ncnn contains a lot of Android demo apps for many models.
- ncnn: High-performance neural network inference framework optimized for mobile
- Esp32 tensorflow lite
ncnn home page: https://github.com/Tencent/ncnn
- MMDeploy: Deploy All the Algorithms of OpenMMLab
ncnn
- Draw Things, Stable Diffusion in your pocket, 100% offline and free
Yes, Android devices tend to have more RAM, which makes running 1024x1024 possible (this is not possible at all on iPhones, which can peak at around 5 GiB of memory with my current implementation; serious engineering would be required to bring that down on iPhone devices). The problem is that I am not sure about speed. I would likely switch to NCNN (https://github.com/Tencent/ncnn) as the backend, which has decent Vulkan compute kernel support. It is definitely a possibility and there is a path to do that.
- What’s New in TensorFlow 2.10?
- [Technical Article] OCR Upgrade
As the leading open-source inference framework in China and worldwide, what we like about it are its nearly zero-cost cross-platform capability, high inference speed, and minimal deployment footprint. (Project address: https://github.com/Tencent/ncnn)
- Is there a functioning neural network or backbone written in pure C language only?
If you're not planning on training the neural net on an embedded device and just need to do inference, this might interest you: https://github.com/Tencent/ncnn
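For inference-only use, ncnn's C++ API is quite small. Below is a minimal sketch (not taken from the project's examples) of loading a converted model and running a single forward pass; note that ncnn itself is C++ rather than pure C, and the file names and blob names ("model.param", "model.bin", "data", "output") are placeholders.

```cpp
#include "net.h"   // ncnn's main header (typically <ncnn/net.h> when installed system-wide)

int main()
{
    ncnn::Net net;

    // Placeholder file names: a model converted to ncnn's .param/.bin format.
    if (net.load_param("model.param") != 0 || net.load_model("model.bin") != 0)
        return -1;

    // Dummy 224x224 RGB input; in practice this would be filled from an image
    // (e.g. with ncnn::Mat::from_pixels).
    ncnn::Mat in(224, 224, 3);
    in.fill(0.5f);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);         // placeholder input blob name
    ncnn::Mat out;
    ex.extract("output", out);    // placeholder output blob name

    // 'out' now holds the raw network output, e.g. class scores.
    return 0;
}
```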
HIPIFY
- AMD Hip SDK: Making CUDA Applications Run Across Consumer, Pro GPUs and APUs
Right. I can't speak to its correctness/completeness as I've only done a quick installation and smoke test of the ROCm/HIP/MIOpen stack, but there's even a tool that automates the translation [1].
[1] https://github.com/ROCm-Developer-Tools/HIPIFY
- How to run Llama 13B with a 6GB graphics card
- How Nvidia’s CUDA Monopoly in Machine Learning Is Breaking
From https://news.ycombinator.com/item?id=32904285 re: AMD ROCm and HIPIFY:
>> ROCm-Developer-Tools/HIPIFY https://github.com/ROCm-Developer-Tools/HIPIFY :
>> hipify-clang is a clang-based tool for translating CUDA sources into HIP sources. It translates CUDA source into an abstract syntax tree, which is traversed by transformation matchers. After applying all the matchers, the output HIP source is produced.
> AMD ROCm supports PyTorch, TensorFlow, MIOpen, rocBLAS on NVIDIA and AMD GPUs: https://rocmdocs.amd.com/en/latest/Deep_learning/Deep-learni...
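To make the quoted description concrete, here is a hand-written illustration (not actual hipify-clang output) of a trivial CUDA program, with comments marking what the mechanical cuda* → hip* renames would produce; that regularity is what makes an AST-based rewrite practical.

```cpp
// Trivial CUDA program; the comments show what a hipify-style translation
// would turn each piece into (illustrative only, not captured tool output).
#include <cuda_runtime.h>   // -> #include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void double_all(float* buf, int n)   // kernel body is unchanged
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) buf[i] *= 2.0f;
}

int main()
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    std::vector<float> host(n, 1.0f);

    float* dev = nullptr;
    cudaMalloc((void**)&dev, bytes);                             // -> hipMalloc
    cudaMemcpy(dev, host.data(), bytes, cudaMemcpyHostToDevice); // -> hipMemcpy / hipMemcpyHostToDevice

    double_all<<<(n + 255) / 256, 256>>>(dev, n);                // launch syntax is unchanged under hipcc

    cudaDeviceSynchronize();                                     // -> hipDeviceSynchronize
    cudaMemcpy(host.data(), dev, bytes, cudaMemcpyDeviceToHost); // -> hipMemcpy / hipMemcpyDeviceToHost
    cudaFree(dev);                                               // -> hipFree

    std::printf("host[0] = %f\n", host[0]);                      // expected: 2.0
    return 0;
}
```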
- Stable Diffusion on AMD RDNA3
> Thus, the idea is that through typically negligible effort porting to HIP, your code becomes vendor-independent.
Here, the big AMD mistake was to rename those function prefixes in the first place. It's a mistake that they could have avoided...
What a lot of software codebases did to support AMD (notably PyTorch): the codebase is still CUDA, and the conversion pass to HIP is done at build time.
See https://github.com/ROCm-Developer-Tools/HIPIFY/blob/amd-stag... for the Perl script to do it.
Then comes the problem of AMD not supporting ROCm HIP on most of their hardware or user base.
On Windows, the ROCm HIP SDK is private and only available under NDA. This means that while you can use Blender w/ HIP on Windows, the Blender builds that you compile yourself will not be able to use ROCm HIP.
On Linux, the supported GPUs are few and far between; Vega20 onwards are supported today. APUs, RDNA1, and lower-end RDNA2 (6700 XT and below) are excluded without unsupported hacks.
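The renaming is mechanical enough that a codebase can also keep a single CUDA-flavoured source tree and map names at compile time. Below is a purely illustrative shim covering only a handful of runtime calls; it is not how PyTorch handles this (PyTorch runs the hipify pass over its sources at build time, as noted above), and the macro set is far from complete.

```cpp
// Illustrative portability shim: maps a few CUDA runtime names onto their HIP
// counterparts so the same source can also be compiled with hipcc for AMD GPUs.
// __HIP_PLATFORM_AMD__ is defined by hipcc when targeting AMD hardware.
#if defined(__HIP_PLATFORM_AMD__)
  #include <hip/hip_runtime.h>
  #define cudaError_t             hipError_t
  #define cudaSuccess             hipSuccess
  #define cudaMalloc              hipMalloc
  #define cudaFree                hipFree
  #define cudaMemcpy              hipMemcpy
  #define cudaMemcpyHostToDevice  hipMemcpyHostToDevice
  #define cudaMemcpyDeviceToHost  hipMemcpyDeviceToHost
  #define cudaDeviceSynchronize   hipDeviceSynchronize
#else
  #include <cuda_runtime.h>
#endif
```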
- AI Seamless Texture Generator Built-In to Blender
https://rocmdocs.amd.com/en/latest/Deep_learning/Deep-learni...
RadeonOpenCompute/ROCm_Documentation: https://github.com/RadeonOpenCompute/ROCm_Documentation
ROCm-Developer-Tools/HIPIFY: https://github.com/ROCm-Developer-Tools/HIPIFY :
> hipify-clang is a clang-based tool for translating CUDA sources into HIP sources. It translates CUDA source into an abstract syntax tree, which is traversed by transformation matchers. After applying all the matchers, the output HIP source is produced.
ROCmSoftwarePlatform/gpufort: https://github.com/ROCmSoftwarePlatform/gpufort :
> GPUFORT: S2S translation tool for CUDA Fortran and Fortran+X in the spirit of hipify
ROCm-Developer-Tools/HIP: https://github.com/ROCm-Developer-Tools/HIP :
> HIP is a C++ Runtime API and Kernel Language that allows developers to create portable applications for AMD and NVIDIA GPUs from single source code. [...] Key features include:
> - HIP is very thin and has little or no performance impact over coding directly in CUDA mode.
> - HIP allows coding in a single-source C++ programming language including features such as templates, C++11 lambdas, classes, namespaces, and more.
> - HIP allows developers to use the "best" development environment and tools on each target platform.
> - The [HIPIFY] tools automatically convert source from CUDA to HIP.
> - *Developers can specialize for the platform (CUDA or AMD) to tune for performance or handle tricky cases.*
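As a rough sketch of the "single-source C++" point in the list above (a minimal example of my own, not taken from the HIP documentation), the same templated kernel compiles with hipcc and can target AMD GPUs or, through HIP's NVIDIA back end, CUDA GPUs:

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Templated kernel: plain single-source C++, as described in the feature list above.
template <typename T>
__global__ void scale(T* data, T factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> host(n, 1.0f);

    float* dev = nullptr;
    hipMalloc((void**)&dev, n * sizeof(float));
    hipMemcpy(dev, host.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // hipcc accepts the familiar CUDA-style triple-chevron launch syntax.
    scale<float><<<(n + 255) / 256, 256>>>(dev, 2.0f, n);
    hipDeviceSynchronize();

    hipMemcpy(host.data(), dev, n * sizeof(float), hipMemcpyDeviceToHost);
    hipFree(dev);

    std::printf("host[0] = %f\n", host[0]);  // expected: 2.0
    return 0;
}
```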
- After the May Day holiday, our workplace required us to hand in our old computers and switch to new domestically made computers and a new operating system. Since it is not compatible with Windows software, we also have to install a Windows emulator, which has set office productivity back ten years. The director complained that this is replacing the advanced with the backward; I thought to myself, even he can see it.
And there is an automatic conversion tool: https://github.com/ROCm-Developer-Tools/HIPIFY https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-porting-guide.html
- Hipify: Convert CUDA to Portable C++ Code
- Hipify: Convert CUDA to Portable Hip C++ Code
- Deep Learning options on Radeon RX 6800
It might be worth checking out HIPIFY, which lets you automatically convert CUDA code to vendor-neutral code that can run on any GPU. Disclaimer: I have never used it and have no idea how it works.
- Will NVIDIA's cryptocurrency limiter interfere with nouveau drivers?
CUDA to AMD HIP conversion: https://github.com/ROCm-Developer-Tools/HIPIFY
What are some alternatives?
XNNPACK - High-efficiency floating-point neural network inference operators for mobile, server, and Web
ZLUDA - CUDA on AMD GPUs
rife-ncnn-vulkan - RIFE, Real-Time Intermediate Flow Estimation for Video Frame Interpolation implemented with ncnn library
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
deepdetect - Deep Learning API and Server in C++14, with support for Caffe, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost and TSNE
llama-cpp-python - Python bindings for llama.cpp
netron - Visualizer for neural network, deep learning and machine learning models
rocm-build - build scripts for ROCm
darknet - Convolutional Neural Networks
kompute - General purpose GPU compute framework built on Vulkan to support 1000s of cross vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). Blazing fast, mobile-enabled, asynchronous and optimized for advanced GPU data processing usecases. Backed by the Linux Foundation.
RPi_64-bit_Zero-2-image - Raspberry Pi Zero 2 W 64-bit OS image with OpenCV, TensorFlow Lite and ncnn Framework.
HIP - HIP: C++ Heterogeneous-Compute Interface for Portability