Summary: | Computer vision tasks such as image classification are in widespread use and have been greatly aided by the development of deep learning techniques, in particular convolutional neural networks (CNNs). Performing such tasks on specialized embedded GPU boards offers intriguing prospects for edge computing. In this study, popular CNN architectures including GoogLeNet, ResNet, and VGG were implemented on the new Jetson Xavier NX Developer Kit. The models were implemented using different deep learning frameworks, including PyTorch, TensorFlow, and Caffe, the latter in combination with TensorRT, Nvidia's optimization tool for inference models. The implementations were evaluated on metrics including timing and resource utilization, and the results were compared. This study concludes that DL-based computer vision tasks remain compute-bound even on more powerful GPU devices, and that the choice of framework has a significant effect on inference performance. In particular, TensorRT yields substantial improvements in inference timing and scales well across model architectures and model depths.
|