ROCm-docker vs ncnn

| | ROCm-docker | ncnn |
|---|---|---|
| Mentions | 3 | 12 |
| Stars | 392 | 19,275 |
| Growth | 1.0% | 1.2% |
| Activity | 5.1 | 9.4 |
| Latest Commit | 23 days ago | 4 days ago |
| Language | Shell | C++ |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ROCm-docker
-
AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
https://rocm.docs.amd.com/projects/install-on-linux/en/lates... links to ROCm/ROCm-docker: https://github.com/ROCm/ROCm-docker which is the source of docker.io/rocm/rocm-terminal: https://hub.docker.com/r/rocm/rocm-terminal :
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video rocm/rocm-terminal
-
Stable Diffusion PR optimizes VRAM, generate 576x1280 images with 6 GB VRAM
Not sure about the 6600, but there is a guide for Linux at least:
https://m.youtube.com/watch?v=d_CgaHyA_n4&feature=emb_logo
And this may also be relevant, as I kept the link open:
https://github.com/RadeonOpenCompute/ROCm-docker/issues/38
-
It's working perfectly under Linux
As for the Docker image, I suppose you could build the image (https://hub.docker.com/r/rocm/pytorch) yourself from the sources (https://github.com/RadeonOpenCompute/ROCm-docker#building-images), which seems to be quite a bit of work. Better, you could just use an older tag of the upstream image, e.g. rocm4.1.1_ubuntu18.04_py3.6_pytorch instead of rocm4.2_ubuntu18.04_py3.6_caffe2 or latest. Just make sure your container version matches your host ROCm version.
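The tag-matching advice above can be sketched as follows. The image tags come from the comment itself; the version-file path is an assumption about a typical ROCm install, and the device flags mirror the rocm-terminal command quoted earlier:

```shell
# Check the host ROCm version (path is an assumption; ROCm installs
# typically record the version under /opt/rocm)
cat /opt/rocm/.info/version

# Pull the rocm/pytorch tag that matches, e.g. for a ROCm 4.1.1 host:
docker pull rocm/pytorch:rocm4.1.1_ubuntu18.04_py3.6_pytorch

# Run it with the same GPU device flags used for rocm-terminal
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video \
  rocm/pytorch:rocm4.1.1_ubuntu18.04_py3.6_pytorch
```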
ncnn
-
AMD Funded a Drop-In CUDA Implementation Built on ROCm: It's Open-Source
ncnn uses Vulkan for GPU acceleration; I've seen it used in a few projects to get AMD hardware support.
https://github.com/Tencent/ncnn
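As a rough sketch of how that Vulkan path gets enabled, ncnn's CMake build exposes an NCNN_VULKAN switch. This is the standard out-of-tree build, not anything specific to the projects mentioned here:

```shell
# Clone ncnn with its bundled dependencies (glslang etc. are submodules)
git clone https://github.com/Tencent/ncnn
cd ncnn
git submodule update --init

# Configure with Vulkan compute enabled, then build
mkdir -p build && cd build
cmake -DNCNN_VULKAN=ON ..
make -j"$(nproc)"
```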
-
[D] Best way to package Pytorch models as a standalone application
They're using NCNN to package the model. Have a look. https://github.com/Tencent/NCNN
-
Realtime object detection android app
Hi. Here is my preferred Android app for realtime object detection: https://github.com/nihui/ncnn-android-nanodet ; https://github.com/Tencent/ncnn contains a lot of Android demo apps for many models.
- ncnn: High-performance neural network inference framework optimized for mobile
-
Esp32 tensorflow lite
ncnn home page: https://github.com/Tencent/ncnn
-
MMDeploy: Deploy All the Algorithms of OpenMMLab
ncnn
-
Draw Things, Stable Diffusion in your pocket, 100% offline and free
Yes, Android devices tend to have more RAM, making 1024x1024 generation possible (this is not possible at all on iPhones, where memory could peak around 5 GiB with my current implementation; some serious engineering would be required to bring that down on iPhone devices). The problem is I am not sure about speed. I would likely switch to NCNN (https://github.com/Tencent/ncnn) as the backend, which has decent Vulkan compute kernel support. It is definitely a possibility and there is a path to do that.
- What’s New in TensorFlow 2.10?
-
[Technical Article] OCR Upgrade
As a leading open-source inference framework in China and worldwide, what we like about it is its nearly zero-cost cross-platform capability, high inference speed, and minimal deployment footprint. (Project address: https://github.com/Tencent/ncnn)
-
Is there a functioning neural network or backbone written in pure C language only?
If you’re not planning on training the neural net on an embedded device and just do inference, this might interest you: https://github.com/Tencent/ncnn
What are some alternatives?
awesome-kubernetes - A curated list for awesome kubernetes sources :ship::tada:
XNNPACK - High-efficiency floating-point neural network inference operators for mobile, server, and Web
AiDungeon2-Docker-ROCm - Runs an AIDungeon2 fork in Docker on AMD ROCm hardware.
rife-ncnn-vulkan - RIFE, Real-Time Intermediate Flow Estimation for Video Frame Interpolation implemented with ncnn library
ZLUDA - CUDA on AMD GPUs
deepdetect - Deep Learning API and Server in C++14 with support for Caffe, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost and TSNE
stable-diffusion - Go to lstein/stable-diffusion for all the best stuff and a stable release. This repository is my testing ground and it's very likely that I've done something that will break it.
netron - Visualizer for neural network, deep learning and machine learning models
docker-elk - The Elastic stack (ELK) powered by Docker and Compose.
darknet - Convolutional Neural Networks
Dokku - A docker-powered PaaS that helps you build and manage the lifecycle of applications
RPi_64-bit_Zero-2-image - Raspberry Pi Zero 2 W 64-bit OS image with OpenCV, TensorFlow Lite and ncnn Framework.