
For Windows 10 with GPU

Updated 2025.01.24

Verify Driver Installation in Windows Environment

1. Checking Graphics Card Installation in Windows

  1. Open Device Manager
  2. Expand Display adapters
  3. Confirm that the NVIDIA graphics card is listed

2. Selecting the Graphics Card in NVIDIA Control Panel

  1. Right-click on the desktop
  2. Select NVIDIA Control Panel
  3. Select Configure PhysX from the left menu
  4. Select the NVIDIA graphics card as the PhysX processor

3. Installing the Graphics Driver

  1. Visit the link https://www.nvidia.com/en-us/drivers/
  2. Search for your graphics card model under Manual Driver Search
  3. Click the Find button once your model is selected
  4. Click the View button for the latest driver
  5. Click the final Download button

4. Check if the driver is installed in the Windows Environment using the command

nvidia-smi
  • Driver Version: the currently installed NVIDIA driver version for the GPU
  • CUDA Version: the highest CUDA version supported by the current driver
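
If only these values are needed, nvidia-smi can also print specific fields directly; the query below is a minimal sketch using the standard --query-gpu options:

# Print only the GPU name and installed driver version
nvidia-smi --query-gpu=name,driver_version --format=csv,noheader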

Installing CUDA Toolkit in WSL Environment

1. Enter WSL Environment and Install CUDA Toolkit

  1. Visit the link https://developer.nvidia.com/cuda-toolkit-archive
  2. Select the CUDA Toolkit version closest to (and not newer than) the CUDA Version recommended by nvidia-smi above
  3. Check the specs of your WSL environment using the command below
cat /etc/os-release
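
To pull out just the fields needed for the next step (distribution name and release number), the output can be filtered; this is a minimal sketch using grep:

# Show only the distribution name and release number
grep -E '^(NAME|VERSION_ID)=' /etc/os-release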

2. Select the following specs on the download page

  • Operating System: Linux
  • Architecture: x86_64
  • Distribution: Ubuntu
  • Version: 20.04
  • Installer Type: runfile(local)
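
Before downloading, you can confirm that the WSL instance matches the Architecture selected above (a quick check, assuming a standard Ubuntu WSL image):

# Should print x86_64, matching the Architecture selected above
$ uname -m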

3. Execute the commands generated for the selected specs

$ wget https://developer.download.nvidia.com/compute/cuda/12.6.2/local_installers/cuda_12.6.2_560.35.03_linux.run
$ chmod +x cuda_12.6.2_560.35.03_linux.run
$ sudo ./cuda_12.6.2_560.35.03_linux.run --silent --toolkit

4. Install gcc and g++ (if no error occurred in the previous step, this step can be skipped)

$ sudo apt update
$ sudo apt install gcc g++
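
To confirm that the compilers are available before re-running the installer (a quick check):

# Both commands should print a version banner
$ gcc --version
$ g++ --version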

5. Check if CUDA Toolkit is installed using the command

ls /usr/local/ | grep cuda

6. Add the CUDA paths in the Linux environment within WSL

$ vi ~/.bashrc

# Add the following script
export PATH=/usr/local/cuda-12.6/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.6/lib64:$LD_LIBRARY_PATH

# Save and execute the following script
$ source ~/.bashrc
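
The same two lines can also be appended without opening an editor; this is an equivalent sketch (cuda-12.6 matches the toolkit version installed above):

# Append the CUDA paths to ~/.bashrc non-interactively and reload it
$ echo 'export PATH=/usr/local/cuda-12.6/bin:$PATH' >> ~/.bashrc
$ echo 'export LD_LIBRARY_PATH=/usr/local/cuda-12.6/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
$ source ~/.bashrc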

7. Verify CUDA version

$ nvcc -V

Installing NVIDIA Container Toolkit in WSL Environment

1. Install the NVIDIA Container Toolkit with the commands below in the WSL environment

$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/libnvidia-container/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
$ sudo apt-get update
$ sudo apt-get install -y nvidia-container-toolkit
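
To confirm the package was installed before editing Docker (a quick check via dpkg):

# The nvidia-container-toolkit package should appear in the list
$ dpkg -l | grep nvidia-container-toolkit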

2. Edit Docker configuration in WSL Environment

  • Access and edit the file /etc/docker/daemon.json
{
  "insecure-registries": ["...excluded"],
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
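
Docker only reads daemon.json at startup, so restart the daemon after saving the file. This is a sketch; if the WSL distribution does not run systemd, sudo service docker restart is the usual alternative:

# Restart Docker so the nvidia default runtime takes effect
$ sudo systemctl restart docker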

3. Enable K8s (Kubernetes) Device Plugin in WSL Environment

  • Enable the Device Plugin with the commands below
$ kubectl delete daemonset nvidia-device-plugin-daemonset -n kube-system # Delete existing one if any

$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.1/nvidia-device-plugin.yml
  • Verify Device Plugin activation
$ kubectl get pods -n kube-system | grep nvidia-device-plugin
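
Once the plugin pod is running, the GPU should be advertised as an allocatable resource on the node (a quick check; nvidia.com/gpu is the resource name registered by the NVIDIA device plugin):

# The node's Capacity/Allocatable sections should list nvidia.com/gpu
$ kubectl describe node | grep -i nvidia.com/gpu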

4. Verify the inference log after running the Edge app and deploying a model

  • Check the log output from PyTorch
PyTorch version: 2.0.1+cu117
CUDA available: True
CUDA version: 11.7
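
The same information can be printed manually from a shell inside the deployed container, assuming PyTorch is installed there (a minimal sketch):

# Print the PyTorch build, CUDA availability, and the CUDA version PyTorch was built against
$ python -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.version.cuda)"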