-
Yes, you can. Images with CUDA preinstalled already exist: https://hub.docker.com/r/nvidia/cuda . To be able to use the GPU from inside a Docker container you also need to install `nvidia-container-runtime` on the host: https://github.com/NVIDIA/nvidia-container-runtime.
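For example, once the runtime is installed, you can start a container with GPU access like this (a minimal sketch; the image tag is only an example, and must be one your host's NVIDIA driver supports):

```
# Request all GPUs and run nvidia-smi inside an official CUDA image
docker run --rm --gpus all nvidia/cuda:12.3.2-base-ubuntu22.04 nvidia-smi
```

If everything is set up correctly, `nvidia-smi` lists the GPUs visible inside the container.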
-
Yes, it is possible: you have to install the NVIDIA Container Toolkit. You can follow this guide. Once you have the toolkit you can build your own image, or pull an official image with PyTorch or your CUDA software (the CUDA version in your Docker image must be supported by the NVIDIA driver installed on your host machine).
-
You can use the CUDA Dockerfile as a reference: https://gitlab.com/nvidia/container-images/cuda/-/blob/master/Dockerfile
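For a simpler starting point than the full reference Dockerfile, here is a minimal sketch that builds on an official CUDA base image. The tag and the `app.py` file are illustrative assumptions; pick a base tag whose CUDA version your host driver supports:

```
# Example base image tag -- adjust to one your host's driver supports
FROM nvidia/cuda:12.3.2-runtime-ubuntu22.04

# Install Python on top of the CUDA runtime (illustrative only)
RUN apt-get update && apt-get install -y --no-install-recommends python3 \
    && rm -rf /var/lib/apt/lists/*

# Hypothetical application file
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```

Build it as usual (`docker build -t my-cuda-app .`) and run it with the GPU exposed via `docker run --gpus all my-cuda-app`.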