Yes, you can. Images with CUDA preinstalled already exist: https://hub.docker.com/r/nvidia/cuda . To use the GPU device from within a Docker container, you need to install `nvidia-container-runtime` on the host: https://github.com/NVIDIA/nvidia-container-runtime.
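As a quick sketch of what that looks like in practice (the image tag is an example, and the commands assume Docker plus the NVIDIA container runtime/toolkit are already installed on a host with an NVIDIA driver):

```shell
# Pull an official CUDA base image (pick a tag compatible with your host driver)
docker pull nvidia/cuda:12.2.0-base-ubuntu22.04

# Run a container with GPU access and verify the GPU is visible inside it
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If `nvidia-smi` prints your GPU from inside the container, GPU passthrough is working.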
Yes, it is possible: you have to install the NVIDIA Container Toolkit. You can follow this guide. Once you have the toolkit, you can create your own image or pull an official image with PyTorch or your CUDA software (your Docker image must have a CUDA version compatible with the one installed on your host machine).
You can use the CUDA Dockerfile as a reference: https://gitlab.com/nvidia/container-images/cuda/-/blob/master/Dockerfile
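A minimal Dockerfile along those lines might look like this (tags and package choices are illustrative assumptions; match the CUDA version to what your host driver supports):

```dockerfile
# Start from an official CUDA runtime image (example tag)
FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04

# Install Python so we can add a CUDA-enabled framework
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Install PyTorch built against a matching CUDA version
# (cu121 wheel index shown as an example)
RUN pip3 install torch --index-url https://download.pytorch.org/whl/cu121
```

Build it with `docker build -t my-cuda-app .` and run it with `docker run --gpus all my-cuda-app` so the container can see the host GPU.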