PythOps

Compile deep learning libraries for the Jetson Nano

Last update: 15 February 2020

OpenCV

Install the dependencies
$ dependencies=(build-essential
              cmake
              pkg-config
              libavcodec-dev
              libavformat-dev
              libswscale-dev
              libv4l-dev
              libxvidcore-dev
              libavresample-dev
              python3-dev
              libtbb2
              libtbb-dev
              libtiff-dev
              libjpeg-dev
              libpng-dev
              libdc1394-22-dev
              libgtk-3-dev
              libcanberra-gtk3-module
              libatlas-base-dev
              gfortran
              wget
              unzip)

$ sudo apt install -y "${dependencies[@]}"
Download the OpenCV source code
$ wget https://github.com/opencv/opencv/archive/4.2.0.zip -O opencv-4.2.0.zip
$ wget https://github.com/opencv/opencv_contrib/archive/4.2.0.zip -O opencv_contrib-4.2.0.zip
$ unzip opencv-4.2.0.zip 
$ unzip opencv_contrib-4.2.0.zip
$ mkdir opencv-4.2.0/build 
$ cd opencv-4.2.0/build
Configure the build
$ cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D WITH_CUDA=ON \
      -D CUDA_ARCH_PTX="" \
      -D CUDA_ARCH_BIN="5.3,6.2,7.2" \
      -D WITH_CUBLAS=ON \
      -D WITH_LIBV4L=ON \
      -D BUILD_opencv_python3=ON \
      -D BUILD_opencv_python2=OFF \
      -D BUILD_opencv_java=OFF \
      -D WITH_GSTREAMER=OFF \
      -D WITH_GTK=ON \
      -D BUILD_TESTS=OFF \
      -D BUILD_PERF_TESTS=OFF \
      -D BUILD_EXAMPLES=OFF \
      -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.2.0/modules \
      ..
Build the package
$ make -j4
Install the package
$ sudo make install
Verification
$ python3 -c "import cv2; print(cv2.__version__)"
4.2.0
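A slightly stricter check is to confirm the build actually enabled CUDA. This is a hedged sketch: `parse_version` is a small helper introduced here for illustration, and `cv2.cuda.getCudaEnabledDeviceCount()` is only populated when OpenCV was configured with WITH_CUDA=ON, as above.

```python
# Verify the installed OpenCV is at least 4.2 and that CUDA support
# made it into the build. parse_version() compares dotted version
# strings numerically (string comparison would rank "4.10" < "4.2").
def parse_version(v):
    """Turn a dotted version string like '4.2.0' into a comparable tuple."""
    return tuple(int(p) for p in v.split(".")[:3])

try:
    import cv2
    assert parse_version(cv2.__version__) >= (4, 2, 0), cv2.__version__
    # Only present when built with WITH_CUDA=ON; 1 on a Jetson Nano.
    print("CUDA devices:", cv2.cuda.getCudaEnabledDeviceCount())
except ImportError:
    print("cv2 not importable -- check that 'sudo make install' succeeded")
```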



TensorFlow 2

The build takes a very long time (the run below took roughly 45 hours on the Nano). Alternatively, you can download the official Python wheel from NVIDIA: https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html.

Install the dependencies
  1. Python packages
$ pip install -U --user pip six numpy wheel setuptools mock
$ pip install -U --user keras_applications keras_preprocessing --no-deps
  2. Bazel
    As of this writing, the official Bazel releases do not include arm64 binaries, so you need to compile Bazel yourself.
    I've already compiled it, so you can download it here. Otherwise, you can compile it yourself by following these instructions:
$ sudo apt install -y default-jdk default-jre unzip zip build-essential python3
$ wget https://github.com/bazelbuild/bazel/releases/download/0.26.1/bazel-0.26.1-dist.zip
$ unzip bazel-0.26.1-dist.zip
$ cd bazel-0.26.1/
$ env EXTRA_BAZEL_ARGS="--host_javabase=@local_jdk//:jdk" bash ./compile.sh
$ sudo cp output/bazel /usr/local/bin
$ bazel version
Build label: 0.26.1- (@non-git)
Download the TensorFlow source code
$ wget https://github.com/tensorflow/tensorflow/archive/v2.0.0.zip -O tensorflow-v2.0.0.zip
$ unzip tensorflow-v2.0.0.zip
$ cd tensorflow-2.0.0/
Configure the build
$ ./configure
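The configure script asks a series of interactive questions (CUDA support, compute capabilities, compiler paths). As a hedged sketch, it can also be driven non-interactively through environment variables; the names below are read by TF 2.0's configure.py, and the values match a Jetson Nano on JetPack 4.x with CUDA 10.0 and cuDNN 7.

```shell
# Answer the configure prompts up front via environment variables,
# then run ./configure from the tensorflow-2.0.0/ checkout.
export PYTHON_BIN_PATH="$(which python3)"
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10.0
export TF_CUDNN_VERSION=7
export TF_CUDA_COMPUTE_CAPABILITIES=5.3   # Tegra X1 compute capability
export TF_NEED_OPENCL_SYCL=0
export TF_NEED_ROCM=0
# ./configure
env | grep '^TF_' | sort
```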
Build the pip package
$ bazel build \
 --local_ram_resources=2048 \
 --config=opt \
 --config=cuda \
 --config=noignite \
 --config=nokafka \
 --config=noaws  \
 --config=nohdfs \
 --config=nonccl \
 //tensorflow/tools/pip_package:build_pip_package
.
.
.
INFO: Elapsed time: 160775.712s, Critical Path: 12635.39s
INFO: 23416 processes: 23416 local.
INFO: Build completed successfully, 29488 total actions
Generate the wheel
$ ./bazel-bin/tensorflow/tools/pip_package/build_pip_package --gpu /tmp/tensorflow_pkg
Install the pip package
$ pip install --user /tmp/tensorflow_pkg/tensorflow-*.whl
Verification
$ python3 -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
2019-12-01 15:55:07.912648: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2019-12-01 15:55:07.913195: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x43543c0 executing computations on platform Host. Devices:
2019-12-01 15:55:07.913249: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Host, Default Version
2019-12-01 15:55:07.921130: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2019-12-01 15:55:08.088722: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-01 15:55:08.089025: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4351e80 executing computations on platform CUDA. Devices:
2019-12-01 15:55:08.089099: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2019-12-01 15:55:08.089678: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-01 15:55:08.089820: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
2019-12-01 15:55:08.107229: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-01 15:55:08.207822: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-12-01 15:55:08.308630: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2019-12-01 15:55:08.453648: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2019-12-01 15:55:08.617065: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2019-12-01 15:55:08.710567: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2019-12-01 15:55:08.975594: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-12-01 15:55:08.975925: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-01 15:55:08.976308: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-01 15:55:08.976503: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-12-01 15:55:08.976729: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-12-01 15:55:08.978404: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-01 15:55:08.978496: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0 
2019-12-01 15:55:08.978540: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N 
2019-12-01 15:55:08.980863: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-01 15:55:08.981162: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-12-01 15:55:08.981344: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/device:GPU:0 with 2405 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
True
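Beyond `is_gpu_available()`, it's worth running one small op so CUDA kernels actually launch. This is a hedged sketch using the TF 2.0 API; `pick_device` is a hypothetical helper added here so the script falls back to the CPU on a machine without a GPU.

```python
# Run a tiny matmul on the GPU (if present) to confirm kernels launch.
def pick_device(gpu_names):
    """Prefer the first GPU device string, else the CPU."""
    return gpu_names[0] if gpu_names else "/CPU:0"

try:
    import tensorflow as tf
    gpus = [d.name for d in tf.config.experimental.list_logical_devices("GPU")]
    with tf.device(pick_device(gpus)):
        x = tf.random.uniform((128, 128))
        print("matmul ok:", bool(tf.reduce_sum(tf.matmul(x, x)) != 0))
except ImportError:
    print("tensorflow not importable -- did the wheel install succeed?")
```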



PyTorch

NVIDIA provides prebuilt Python wheels for PyTorch. See this thread for more information:
https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano-version-1-3-0-now-available/
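Once the wheel from that thread is installed, a quick hedged check that the Tegra GPU is visible to PyTorch:

```python
# Check that PyTorch was built with CUDA and can see the Nano's GPU.
cuda_ok = False
try:
    import torch
    cuda_ok = torch.cuda.is_available()
    print("torch", torch.__version__, "CUDA:", cuda_ok)
    if cuda_ok:
        print("device:", torch.cuda.get_device_name(0))
except ImportError:
    print("torch not importable -- install the wheel from the thread above")
```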


Dlib

Download the latest stable version

$ wget https://github.com/davisking/dlib/archive/v19.19.zip 
$ unzip v19.19.zip

Build and install the package

$ cd dlib-19.19
$ python3 setup.py install --user
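To verify the install, a hedged sketch: dlib exposes a compile-time flag, `dlib.DLIB_USE_CUDA`, that records whether the build above found the CUDA toolkit.

```python
# Confirm dlib imports and report whether it was built with CUDA.
use_cuda = None
try:
    import dlib
    use_cuda = bool(dlib.DLIB_USE_CUDA)
    print("dlib", dlib.__version__, "built with CUDA:", use_cuda)
except ImportError:
    print("dlib not importable -- did 'python3 setup.py install' succeed?")
```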

Recommended reading