OpenVINO™ comes with a wide variety of pre-trained models (trained by the Intel team or the computer vision community) and code samples, which can easily be combined to prototype a demo or proof-of-concept quickly in segments such as digital surveillance and retail. It is the ideal first step in evaluating the right Intel architecture for a given solution.
This presentation will showcase a variety of possible prototype solutions (with live demos) for different segments. It will present the available code samples and explain how they can be combined with Intel models to create a prototype.
Real-time communication has long been dominated by a wide range of proprietary solutions. Due to their lack of interoperability, real-time networks are often fragmented. This challenge has grown substantially over recent years and is currently one of the main obstacles in the digitization of production. Many approaches in the context of Industry 4.0, e.g. digital twins or big data, are typically located in an IT environment but require access to sensors, actuators and control systems located in an OT network. To fulfill these requirements, converged networks integrating IT and OT are needed.
With Time-Sensitive Networking (TSN), standard Ethernet is being extended with deterministic capabilities through a set of IEEE 802.1 standards. As a result, standard Ethernet will fulfill the requirements of the field level while at the same time enabling converged networks. The presentation provides an overview of state-of-the-art real-time communication and analyzes the requirements for converged networks. It introduces the TSN technology, its current state, key features and use cases. We also discuss the open-source project AccessTSN, which is developing a Linux reference architecture covering configuration, time synchronization and scheduling. Specific examples of how to use the new features are also given.
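On Linux, time-aware scheduling is driven by per-packet priorities. A minimal sketch (my illustration, not taken from AccessTSN; the priority value is an assumption) shows how an application tags its traffic so that a TSN qdisc such as taprio or mqprio can map it onto an IEEE 802.1Q traffic class:

```python
# Minimal Linux sketch (illustrative, not part of AccessTSN): tag a socket's
# traffic with a priority that a TSN qdisc (e.g. taprio or mqprio) can map
# onto an IEEE 802.1Q traffic class with a reserved transmission window.
import socket

PRIORITY = 3  # assumed traffic class for this illustration

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY, PRIORITY)

# Every frame sent on this socket now carries this priority, so a configured
# time-aware scheduler can place it into its dedicated time slot.
assigned = sock.getsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY)
print("socket priority:", assigned)
```

The qdisc itself (gate schedule, cycle time) is configured separately with `tc`; the application only needs to mark its traffic consistently.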
The Intel® Neural Compute Stick 2 (NCS2) enables rapid prototyping, validation and deployment of Deep Neural Network (DNN) inference applications at the edge. Its low-power VPU architecture enables an entirely new segment of AI applications that aren’t reliant on cloud connectivity. The Intel® NCS2, combined with the Intel® Distribution of OpenVINO™ toolkit, allows developers to profile, tune and deploy Caffe or TensorFlow trained Convolutional Neural Networks (CNN) on low-power applications that require real-time inferencing.
This presentation will provide an overview of the key challenges in delivering AI products and solutions at the edge, example use cases, and the typical journey to develop, prototype and commercialize solutions.
From cameras to cloud, Intel offers a range of heterogeneous hardware and software to accelerate development. In this session we’ll focus on the network edge, where the action is. In smart cameras, edge servers and intelligent appliances for applications such as industrial automation, public safety, retail and health technology, developers are working to harness and analyze data at the edge. The presentation will dive into the hardware platforms for AI at the edge, from general-purpose Intel CPUs to the special-purpose Intel Movidius VPU for vision processing, and share details about Intel’s industry-leading software tools, including the Intel® Distribution of OpenVINO™ toolkit, for accelerating the development and integration of intelligent vision solutions. The session will offer an understanding of the features and capabilities of Intel’s edge AI portfolio.
This presentation provides an overview of the development of intelligent voice data analysis from a machine learning (ML) perspective: a historical and state-of-the-art overview and a peek into future trends in the field of artificial intelligence. The session focuses on areas within the voice recognition domain that appear important for applying ML to medical diagnosis. It also describes a recently developed method of detecting respiratory problems quickly by recognizing changes in voice over time using ML algorithms.
In the last decade, the world of computing has shifted from a rigidly defined client-server model to one where the majority of electronic devices are considered computers, regardless of form factor or purpose. The ubiquitous nature of this change is popularly known as the Internet of Things (IoT), and it brings along a new wave of business challenges. Traditional, monolithic product development is now often distributed across multiple projects and constrained by the disparate cycles and priorities of their different components. Strong interdependencies and blurred boundaries in the edge device stack result in fragmentation, slow updates, security issues, increased cost, and reduced reliability of platforms.
With the IoT shift in mind, this presentation introduces and showcases a DevOps-driven model based on Ubuntu Core, a transactional, hardened version of the Ubuntu operating system, and snaps, self-contained and isolated applications with automatic updates. Together they are designed to offer a scalable alternative to the existing paradigm, with a focus on reliability and an accelerated development cycle.
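As an illustration of the model, a snap is described by a single declarative file. The sketch below is a hypothetical snapcraft.yaml (every name in it is invented for illustration, not taken from the talk), showing strict confinement and a supervised service:

```yaml
# Hypothetical snapcraft.yaml sketch; all names are invented for illustration.
name: edge-sensor-app
base: core18
version: '0.1'
summary: Example edge telemetry service
description: Sketch of packaging an IoT service as a snap.
grade: stable
confinement: strict        # isolated from the rest of the system

apps:
  sensor:
    command: bin/sensor-daemon
    daemon: simple         # supervised, restarted on failure
    plugs: [network, network-bind]

parts:
  sensor:
    plugin: python
    source: .
```

Because the application, its dependencies and its interfaces (plugs) are declared in one place, updates ship as atomic transactions that roll back automatically on failure.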
In the IoT world, where devices from small drones to autonomous vehicles aim to mimic and replace human localization and navigation, achieving six degrees of freedom (6DOF) tracking is highly important.
The Intel® RealSense™ Tracking Camera T265 provides an all-inclusive, high-quality tracking solution that processes its input signals through a Simultaneous Localization and Mapping (SLAM) algorithm running on an Intel® Movidius™ Myriad™ 2 VPU. The power-efficient camera consumes less than 1.5 watts, features a small form factor and provides low-latency poses at a sampling rate of 200 Hz. Because all tracking runs on-device, the camera is platform agnostic.
First, we’ll present the use of the T265 with the D435 depth camera on a mobile robot for occupancy mapping, path planning, obstacle avoidance and autonomous navigation. The demo uses the open-source RealSense SDK and the corresponding Robot Operating System (ROS) packages. Second, we’ll showcase the use of the T265 along with the D415 depth camera in an AR application that mimics a technician’s lab.
Advantech is a leading IoT solutions provider, integrating IoT building blocks from the computing platform, connectivity and sensing/control perspectives. Customers can accelerate their IoT implementations by using these building blocks with the Advantech WISE-PaaS edge intelligence cloud platform. In this session, Advantech will introduce its IoT and edge intelligence strategy, its collaboration with Intel on MRS and RRK development, and related use cases. Audiences will learn how Advantech can help them cross the chasm and embrace digital transformation.
Industrial IoT and today’s autonomous environments require faster, more efficient engineering processes at reduced cost. The smart factories and industrial systems of an autonomous future will deliver superior operations and open new markets, building on the business value of making things smart and intelligent and enabling safe, efficient and comfortable living. Smart factories and automotive IoT systems are predicted to be the fastest-growing sectors of IoT. Challenges include standards, security, driving safety, approaches to superior monitoring and the required machine learning cycles.
IoT has limitless potential: sensors collect data, while location systems and RFID monitor all kinds of situational data. Today, cameras and video technologies are relatively inexpensive and widely deployed, and combined with object detection they can be used to answer a variety of questions.
Computer vision workflows built on the OpenVINO toolkit provide visual inference with neural network optimization, including support for the deep learning algorithms that accelerate smart video applications.
We discuss best practices for industrial and automotive applications, including deployment and monitoring, to increase flexibility, scaling and simplicity for end-to-end applications. These practices involve developing middleware solutions for smart mobility networks, enabling next-generation technologies and the associated Software-as-a-Service and Platform-as-a-Service infrastructure for rapid IoT deployments, and infusing AI and machine learning into systems. Realizing smarter industrial IoT requires optimization and improved performance using frameworks and topologies supported by accelerator tools. These provide tremendous insight into options for overall performance improvement and accelerate adoption.