Computer Vision (CV) solutions often face bandwidth, storage, and latency limitations. Scaling CV applications to meet Power-and-Performance (PnP) KPIs and cost requirements can be expensive, and deploying AI models on different hardware may require a complete solution redesign. In our experience, efficiently integrating deep learning capabilities into traditional CV applications without significant performance overhead is a non-trivial process. To address these challenges, Intel has developed optimized AI frameworks, the Intel® System Studio suite, and the Intel® Distribution of OpenVINO™ Toolkit, targeted mainly at embedded visual computing developers.
In this presentation, we will cover the following topics: deep learning training using Intel® Optimization for TensorFlow; heterogeneous inference through the common API of Intel's Deep Learning Deployment Toolkit (DLDT) on Intel CPUs, Intel® Processor Graphics (GPUs), and vision processing units (VPUs), along with Intel-optimized OpenCV and OpenVX libraries for traditional computer vision; and developing and profiling CV applications natively on the target platform using Intel® System Studio, a cross-platform tool suite purpose-built to simplify system bring-up and improve system and application performance on Intel platforms.
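To make the heterogeneous-execution idea concrete, here is a minimal Python sketch of the device-fallback pattern an application uses with a common inference API: request a plugin by device name and fall back down a priority list when a device is absent. The `pick_device` helper and the device sets below are purely illustrative assumptions, not part of the real toolkit API; in the actual OpenVINO™ Inference Engine the device name is passed to `IECore.load_network(network=..., device_name=...)`.

```python
# Illustrative sketch only (NOT the real OpenVINO API): selecting an
# inference device by priority, the way an application targets DLDT
# plugins. With the real toolkit the equivalent step would be
#   ie = openvino.inference_engine.IECore()
#   exec_net = ie.load_network(network=net, device_name=device)

# Prefer the VPU, then the integrated GPU, with the CPU as the
# universal fallback every system has.
DEVICE_PRIORITY = ("MYRIAD", "GPU", "CPU")

def pick_device(available, priority=DEVICE_PRIORITY):
    """Return the first preferred device that is actually present."""
    for device in priority:
        if device in available:
            return device
    raise RuntimeError("no supported inference device found")

if __name__ == "__main__":
    # On a desktop without a VPU stick, only CPU and GPU plugins load:
    print(pick_device({"CPU", "GPU"}))  # -> GPU
    print(pick_device({"CPU"}))         # -> CPU
```

Because the model is loaded the same way regardless of which name is chosen, the application code stays identical across CPU, GPU, and VPU targets; only the device string changes.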