Watching A Replay
To access the archived session(s), click “Play Session” next to the desired session below. To locate sessions of interest more easily, we recommend using the Filter by Date or Track feature, which lets you sort sessions by conference/event track.
Edge-based video analytics will transform IoT, creating new applications and business opportunities. Gorilla’s partnership with Intel and its development of edge AI computer vision, using Intel’s OpenVINO™ optimization combined with Movidius and FPGA hardware, has produced vast efficiency and performance improvements on Atom, Core and Xeon based devices. Until now, most video analytics platforms have focused on government and commercial projects and settings such as smart cities, offices and retail. There is still huge room for innovation and development in ‘everyday spaces’: unmanned transportation stations, work sites, factories, automotive, healthcare, home security and energy savings. More innovative use cases will come with better video analytics and business insights. Continuing to develop and improve the analysis of unstructured data, such as video analytics on edge devices, will take the IoT market to the next level by delivering high-value, innovative applications and contributions to the entire tech ecosystem.
IEI Integration Corp. is a leading industrial computer provider with more than 20 years of experience in the field. IEI continues to promote its own-brand products as well as serving ODM vertical markets with complete and professional services. IEI’s products are used in Intel solution-based applications such as factory automation, healthcare, transportation, networking appliances, security, building automation, industrial IoT (Internet of Things) and AI (artificial intelligence). In this session we will introduce IEI’s AI accelerator solution, the “Mustang Series”, and show its features, advantages and benefits to the customer. “Mustang Series” products are built on Intel CPU, FPGA and VPU solutions.
Computer vision has become the most popular field of AI application. However, industry clients face problems such as being unable to find suitable algorithms in the market or to differentiate the quality of algorithms. This session will analyze the current state of computer vision technology and show how to use Intel acceleration toolkits to improve the results of applied algorithms.
Mainflux is an open-source, Apache 2.0 licensed, high-performance and scalable IoT middleware platform. Mainflux is designed to handle and process data coming from your IoT devices, edge gateways, LoRa gateways and deployments. Mainflux is developed using a microservices architecture and the Go programming language. Due to its design and technology choices, Mainflux can be deployed on gateway-class devices, on a single server, or scaled using Kubernetes to multi-data-center systems. Mainflux is cloud-agnostic and can be set up on any public or private cloud.
This session will cover the main features of the Mainflux IoT platform and show how to create a fully featured IoT solution using Mainflux.
We will explain:
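To make the platform concrete, here is a sketch of the kind of SenML message a device might publish to Mainflux’s HTTP adapter. The base name, channel ID, thing key and endpoint shown in comments are placeholders rather than values from this session; only the payload construction runs here.

```python
import json

def senml_payload(base_name, readings):
    """Build a SenML record set from (name, unit, value) tuples."""
    records = []
    for i, (name, unit, value) in enumerate(readings):
        record = {"n": name, "u": unit, "v": value}
        if i == 0:
            record["bn"] = base_name  # base name applies to this and later records
        records.append(record)
    return json.dumps(records)

payload = senml_payload("urn:dev:mac:0024befffe804ff1:",
                        [("temperature", "Cel", 23.4),
                         ("humidity", "%RH", 51.0)])

# The message would then be POSTed to the Mainflux HTTP adapter, roughly:
#   POST http://<mainflux-host>/http/channels/<channel-id>/messages
#   Authorization: <thing-key>
#   Content-Type: application/senml+json
print(payload)
```

The channel ID and thing key come from Mainflux provisioning; SenML is the default message format the platform expects.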
With insights from sensors and smart devices, physical stores are now able to challenge the established ways of doing business, offering more tailored experiences and incorporating digital elements into the shopper’s journey. Digital transformation in retail answers customer demands by creating a new concept of connected stores that bring interactive, customized shopping experiences. This shift is crucial for bricks-and-mortar shops to address the synergies between the digital and physical worlds. Retailers can now leverage real-time data that allows the prediction of customer trends in physical stores. Access to this data helps improve the shopping experience, along with efficiency and sales, by making use of available technological solutions.
While Industry 4.0 (I4.0) is being adopted by global players at a rapid pace, there is a risk that SMEs, the backbone of industrial innovation, will be left behind. In Germany, SMEs are responsible for 97% of all exports and account for some 1,300 of the world’s hidden champions. Unfortunately, most available I4.0 solutions do not match SME requirements such as independence from specific vendors, moderate deployment costs and data sovereignty. To solve this problem, the Industry Business Network 4.0 (IBN 4.0) has created a joint SME-ready solution based on open components. This talk will present the requirements, architecture and current state of the IndustryFusion implementation, and will detail StarlingX and the Open IoT Service Platform. The session is presented by Konstantin Kernschmidt (Project Lead, IBN 4.0), Martin Mohring (CTO of 5e EcoSystems) and Marcel Wagner (Software ecosystem enabler, Intel).
OpenVINO™ comes with a wide variety of pre-trained models (trained by Intel teams or the computer vision community) and code samples, which can easily be used together to prototype a demo or proof-of-concept faster in segments such as digital surveillance and retail. It is the perfect first step in evaluating the right Intel architecture for a given solution.
This presentation will showcase a variety of possible prototype solutions (with live demos) that can be performed for different segments. It will present the different code samples available and explain how they can be combined with Intel models to create a prototype.
The holy grail of the IoT is the ability to easily distribute the intelligence of your application across cloud and the edge. Being able to run analytics, AI or store data at the edge addresses many common and key enterprise IoT scenarios. Learn how to easily create deployments for IoT devices that include AI, machine learning, stream analytics, as well as your own custom code on devices smaller than a Raspberry Pi.
The Intel® Neural Compute Stick 2 (NCS2) enables rapid prototyping, validation and deployment of Deep Neural Network (DNN) inference applications at the edge. Its low-power VPU architecture enables an entirely new segment of AI applications that aren’t reliant on cloud connectivity. The Intel® NCS2, combined with the Intel® Distribution of OpenVINO™ toolkit, allows developers to profile, tune and deploy Caffe or TensorFlow trained Convolutional Neural Networks (CNN) on low-power applications that require real-time inferencing.
This presentation will provide an overview of the key challenges addressed in delivering AI products and solutions at the edge, example use cases, and the typical journey to develop, prototype and commercialize solutions.
Real-time communication has been dominated by a wide range of proprietary solutions. Due to their lack of interoperability, real-time networks are often fragmented. This challenge has grown substantially over recent years and is currently one of the main obstacles to the digitization of production. Many approaches in the context of Industry 4.0, e.g. digital twins or big data, are typically located in an IT environment but require access to sensors, actuators and control systems located in an OT network. To fulfill these requirements, converged networks integrating IT and OT are needed.
In the context of Time-Sensitive Networking (TSN), standard Ethernet (IEEE 802.1) is being extended with deterministic capabilities. As a result, standard Ethernet will fulfill the requirements of the field level while at the same time enabling converged networks. The presentation provides an overview of state-of-the-art real-time communication and analyzes the requirements for converged networks. The TSN technology, its current state, key features and use cases are introduced. We also talk about the open-source project AccessTSN, which is developing a Linux reference architecture covering configuration, time synchronization and scheduling. Specific examples of how to use the new features are also given.
From cameras to cloud, Intel offers a range of heterogeneous hardware and software to accelerate development. In this session we’ll focus on the network edge, where the action is. Across smart cameras, edge servers and intelligent appliances for applications such as industrial automation, public safety, retail and health technology, developers are trying to harness and analyze data at the edge. The presentation will dive into the hardware platforms for AI at the edge, from general-purpose Intel CPUs to the special-purpose Intel Movidius VPU for vision processing, and share details about Intel’s industry-leading software tools, including the Intel® Distribution of OpenVINO™ Toolkit, for accelerating the development and integration of intelligent vision solutions. The session will offer an understanding of the features and capabilities of Intel’s edge AI portfolio.
In the last decade, the world of computing has shifted from a rigidly defined client-server model to one where the majority of electronic devices are considered to be computers, regardless of form factor or purpose. The ubiquitous nature of this change is popularly known as the Internet of Things (IoT), and it brings along a new wave of business challenges. Traditional, monolithic product development is now often distributed across multiple projects and constrained by the disparate cycles and priorities of these different components. Strong interdependencies and blurred boundaries in the edge device stack result in fragmentation, slow updates, security issues, increased cost, and reduced reliability of platforms.
With the IoT shift in mind, this presentation introduces and showcases a DevOps-driven model based on Ubuntu Core - a transactional, hardened version of the Ubuntu operating system - and Snaps - self-contained and isolated applications with automatic updates, designed to offer a scalable alternative to the existing paradigm, with focus on reliability and accelerated development cycle.
This presentation provides an overview of the development of intelligent voice data analysis from a machine learning (ML) perspective: a historical and state-of-the-art overview and a peek into some future trends in the field of artificial intelligence. The session focuses on areas within the voice recognition domain that appear important for applying ML to medical diagnosis. It also describes a recently developed method of detecting respiratory problems quickly by recognizing changes in voice over time using ML algorithms.
In the IoT world, where devices from small drones to autonomous vehicles aim to mimic and replace human localization and navigation, achieving 6 Degrees of Freedom (6DOF) tracking is highly important.
The Intel® RealSense™ Tracking Camera T265 provides an all-inclusive, high-quality tracking solution that processes input signals through a Simultaneous Localization and Mapping (SLAM) algorithm on an Intel® Movidius™ Myriad™ 2 VPU. The power-efficient camera consumes less than 1.5 W, features a small form factor and provides low-latency poses at a sampling rate of 200 Hz, making the camera platform-agnostic.
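A 6DOF pose combines a 3D position with an orientation, and the T265 reports orientation as a quaternion. As a small self-contained sketch (the pose values below are hypothetical, not taken from a real camera), here is the standard quaternion-to-Euler conversion one might apply to a pose sample:

```python
import math

def quaternion_to_euler(w, x, y, z):
    """Convert a unit quaternion (as found in a T265 pose sample) to
    roll/pitch/yaw Euler angles in radians."""
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw

# A hypothetical 6DOF pose: 3D translation plus a 90-degree rotation
# about the vertical axis, expressed as a (w, x, y, z) quaternion.
half = math.radians(90.0) / 2.0
pose = {
    "translation": (0.5, 0.0, 1.2),  # metres (illustrative values)
    "rotation": (math.cos(half), 0.0, 0.0, math.sin(half)),
}
roll, pitch, yaw = quaternion_to_euler(*pose["rotation"])
print(roll, pitch, yaw)  # yaw is pi/2, roll and pitch are 0
```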
First, we’ll present the use of the T265 with the Depth Camera D435 on a mobile robot for occupancy mapping, path planning, obstacle avoidance and autonomous navigation. The demo uses the open-source Realsense SDK & the corresponding Robot Operating System (ROS) packages. Second, we’ll showcase the use of the T265 along with the D415 Depth camera in an AR application that mimics a technician's lab.
Advantech is a leading IoT solutions provider, integrating IoT building blocks from the computing platform, connectivity and sensing/control perspectives. Customers can accelerate their IoT implementations by combining these building blocks with the Advantech WISE-PaaS Edge Intelligence Cloud Platform. In this session, Advantech will introduce its IoT strategy and edge intelligence offerings, its collaboration with Intel on Market Ready Solution (MRS) and RFP Ready Kit (RRK) development, and related use cases. Audiences will learn how Advantech can help them jump over the chasm and embrace digital transformation.
Industrial IoT and today’s autonomous environments require faster and more efficient engineering processes at reduced cost. The smart factories and industrial systems of the autonomous future will deliver superior operations and open new markets, with the business value of making things smart and intelligent enabling safe, efficient and comfortable living. Smart factories and automotive IoT systems are predicted to be the fastest growing sectors of IoT. Challenges include standards, security, driving safety, approaches to superior monitoring and the required machine learning cycles.
IoT has limitless potential: sensors collect data, while location systems and RFID monitor all kinds of situational data. Today, cameras and video technologies are relatively inexpensive and widely deployed, and along with object detection technologies they can be used to answer a variety of questions.
Computer vision workflows and the OpenVINO toolkit provide visual inference using neural network optimization, including support for deep learning algorithms that help accelerate smart video applications.
We discuss best practices for industrial and automotive applications, including deployment and monitoring, to increase flexibility, scaling and simplicity for end-to-end applications. These involve developing middleware solutions for smart mobility networks, enabling next-generation technologies and the associated Software-as-a-Service and Platform-as-a-Service infrastructure for rapid IoT deployments, and infusing AI and machine learning into systems. Realizing smarter industrial IoT requires optimization and improved performance using frameworks and topologies supported by accelerator tools, which offer major advantages. These provide tremendous insight into options for overall performance improvement and accelerate adoption.
Computer vision-based solutions utilize enhanced deep learning neural networks that allow data to be collected in more sophisticated ways, taking analytics to the next level: nonlinear, contextual, and accessible from multiple vantage points. Intel is leading the evolution of edge compute and computer vision solutions, helping organizations unlock new possibilities for their data with a comprehensive stack of products designed for AI.
Accelerating Time-To-Market (TTM) for your applications is critical to business success. Ensuring that you have a proven path to deploying your deep learning models in the field is therefore imperative. In this session, you will learn about the various options you have available through Intel for deployment to the edge - CPU, Integrated Graphics, Intel® Movidius™ Neural Compute Stick and FPGA. Using the Intel® Distribution of OpenVINO™ Toolkit and an Image Classification example, we will show you how to build hardware agnostic Intermediate Representations (IR) that you can then seamlessly deploy to multiple edge devices. Come join us for this exciting session!
As almost every object in your home and office becomes potentially internet-enabled, the IoT is poised to place major stress on current internet and datacenter infrastructure. The popular approach is to centralize cloud data processing in a single site, resulting in lower costs and strong application security. However, with the sheer amount of input data that will arrive from globally distributed sources, this structure will need reinforcement. Moreover, in most cases enterprise data is pushed up to the cloud, stored and analyzed, after which a decision is made and action taken. This system is not efficient; to make it so, some IoT data needs to be processed in a smart way closer to where it originates, especially sensitive data that needs quick action.
IDC estimates that the amount of data analyzed on devices physically close to IoT endpoints is approaching 40 percent, which supports the argument for a different approach to smart data processing. Edge/fog computing is the ideal solution to this challenge, as it allows computing, decision-making and the required action to happen on or near the IoT device itself, pushing only relevant data to the cloud.
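The “push only relevant data” pattern can be sketched in a few lines: the edge node keeps a rolling window of readings and forwards only values that deviate sharply from the recent mean. The threshold rule here is purely illustrative; a real deployment would use domain-specific logic.

```python
class EdgeFilter:
    """Decide at the edge which readings are worth pushing to the cloud.
    A reading is 'relevant' here when it deviates from the rolling mean
    by more than a fixed threshold (illustrative anomaly rule)."""

    def __init__(self, threshold=5.0, window=10):
        self.threshold = threshold
        self.window = window
        self.history = []

    def process(self, value):
        forward = False
        if self.history:
            mean = sum(self.history) / len(self.history)
            if abs(value - mean) > self.threshold:
                forward = True  # anomaly: push this reading to the cloud
        self.history.append(value)
        self.history = self.history[-self.window:]  # keep a bounded window
        return forward

# A steady sensor with one spike: only the spike leaves the edge.
readings = [20.0, 21.0, 20.0, 19.0, 40.0, 20.0]
f = EdgeFilter(threshold=5.0, window=10)
forwarded = [v for v in readings if f.process(v)]
print(forwarded)  # -> [40.0]
```

All normal readings stay (and could be aggregated) at the edge, so cloud bandwidth scales with events rather than with raw sample rate.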
In this session we will show how to build and deploy edge computing microservices with Eclipse ioFog. We will use OpenVINO™ on a Raspberry Pi to make some high-value microservices that perform computer vision accelerated by an Intel® Movidius™ Neural Compute Stick. Eclipse ioFog gives developers a standard way to package their software for the edge and manage it just like cloud microservices. This deep dive session will highlight the architectural approach and benefits that OpenVINO™ developers can take advantage of immediately in their work.
Determining if a deep learning model designed for computer vision will meet expectations when it is deployed is difficult and time consuming with existing tools and practices. Today, developers must run iterative experiments with command-line tools or custom scripting to identify throughput, latency and accuracy trade-offs. In this presentation, developer experience professionals at Intel will illustrate how a research and DX design process can be applied to create innovative deep learning solutions for computer vision. You'll have an opportunity to preview a design prototype that simplifies the process of performance profiling and tuning of deep learning models for use with Intel’s OpenVINO™ Toolkit. You'll also learn how your organization can apply DX best practices to improve customer experiences with your own products.
In this talk, we will cover how industrial processes are transforming from static, process-based operational technology to increasingly computerized Industry 4.0 processes. We will discuss the concept of computer “workloads” and how they are replacing current factory processes. We will also describe how Kubernetes can be used to manage the deployment of these workloads through a tested automation pipeline.
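As an illustrative sketch of what a factory “workload” looks like to Kubernetes, here is a minimal Deployment manifest. All names, labels and the container image are hypothetical; the point is that the same declarative object a cloud team would use can describe an industrial process pinned to specific edge nodes.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vision-inspection        # hypothetical factory workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: vision-inspection
  template:
    metadata:
      labels:
        app: vision-inspection
    spec:
      nodeSelector:
        factory-zone: line-3     # pin the workload to edge nodes on one line
      containers:
      - name: inspector
        image: registry.example.com/vision-inspection:1.4.2
        resources:
          limits:
            cpu: "2"
            memory: 1Gi
```

Because the desired state is declared in a file, the manifest can flow through the same tested automation pipeline (version control, CI, staged rollout) as any other software artifact.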
Learn about EdgeX - the open platform for the IoT edge. In this session, Michael Estrin (Dell) and Beau Frusetta (Intel) will discuss getting started with developing on EdgeX. There will also be a demo of how EdgeX is used to run on an Intel-powered Dell Gateway, which is connected to several cameras for surveillance purposes.
The session will showcase insights into the drivers, solution, and benefits of real-time edge intelligence in a gas refinery, leveraging FogHorn video analytics and machine learning software, combined with ADLINK hardware.
FogHorn, a leading developer of “edge intelligence” software for industrial and commercial IoT solutions, will provide a detailed overview of how leading global producers of energy and chemicals can use an Intel RRK solution, including FogHorn’s software for edge computing, real-time analytics, AI and machine learning (ML), combined with ADLINK hardware, to drive enhanced industrial performance across a variety of real-world industrial use cases.
This session will discuss and demonstrate use cases of flare stack monitoring in a gas refinery, in combination with leading Industrial IoT Cloud solutions. The discussion will break down the challenges leading Oil & Gas companies face with flare stack monitoring, including the number of stacks monitored, limited communications, constrained compute resources, environmental and regulatory compliance goals and tightening maintenance budgets. The session will also share lessons learned to date, and the benefits edge-based analytics, AI and ML technologies offer to improve operational efficiencies, lower maintenance costs, enhance compliance-related activities and improve safety.
With both Facebook’s and Google’s recent shifts in direction toward a “Future is Private” world, learn how you too can train and deploy your AI models in a privacy-preserving way with decentralized AI and a combination of AI and blockchain. These techniques will become even more widespread as we move into a world where users own their own data and companies start using “ethically sourced data”, moving toward a path of ethical AI for the IoT space.
In this session, you will learn:
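One core technique behind privacy-preserving, decentralized training is federated learning: devices train locally and only model weights, never raw data, travel to an aggregator. Below is a minimal sketch of federated averaging with equal client weighting; the client weight vectors are made-up illustrative numbers, and a real model would use tensors rather than flat lists.

```python
def federated_average(client_weights):
    """Average locally trained model weights across clients.
    Raw training data never leaves each device; only these weight
    vectors are shared with the aggregator (FedAvg, equal weighting)."""
    n_clients = len(client_weights)
    n_params = len(client_weights[0])
    return [sum(w[i] for w in client_weights) / n_clients
            for i in range(n_params)]

# Three hypothetical devices report weights after a round of local training.
clients = [
    [0.10, 0.50, -0.30],
    [0.20, 0.40, -0.10],
    [0.00, 0.60, -0.20],
]
global_model = federated_average(clients)
print(global_model)  # approximately [0.1, 0.5, -0.2]
```

In production systems the averaging is typically weighted by each client’s sample count, and secure aggregation or differential privacy can further protect the individual weight updates.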
Stewart Christie has been teaching classes featuring the Intel® Distribution of OpenVINO™ Toolkit for over a year, and has gathered a set of frequently asked questions (FAQs) and customer feedback from his workshops. The session will highlight the answers to these questions, and some use cases - both good and bad - for AI and computer vision.
Whether an industrial equipment company builds devices for manufacturing, energy, medical, transportation or another segment, it is experiencing a dramatic digital transformation from a focus on hardware technology to a modern focus on software technology that allows the consolidation of industrial systems. Two software technologies are helping bring this digital transformation to the latest designs of industrial equipment and devices: virtualization and containerization. Discover, through use cases, how these two technologies are enabling the consolidation of industrial systems and driving the digital transformation of the manufacturing, medical and rail transportation market segments.
Rapid growth in the service industries, the pressing need to adapt to changing consumer demand, market challenges and rapid technological advancement against a backdrop of cut-throat competition are driving the growth of the professional service robot market. The open-source Robot Operating System (ROS) and artificial intelligence (AI) now make it possible to react to this rapidly growing demand. ADLINK’s ROS 2.0 development platform comes not only with hardware but also with ADLINK’s commercial ROS 2.0 SDK, called the Neuron SDK. The platform offers three controller specifications to suit different robotic applications. Best of all, ADLINK has already built the ROS/ROS 2.0 and DDS environment into the Neuron SDK, so once customers receive the ROS 2.0 development platform they can use our libraries and APIs to control robots easily. Customers thus enjoy all the advantages of open-source ROS along with an advanced DDS implementation for real-time communication, helping to reduce development costs and shorten time-to-market.
We go beyond the barcode to talk about machine vision and machine learning (ML) with Edge IoT, the combination of edge computing and IoT technology, which is used to prevent “missing in action” items in inventory, intercept shipments before they head to the wrong destination, automate parts picking and more, while creating instantaneous communication between the manufacturing floor and related systems. We’ll discuss ML basics, how vision ML is being used in manufacturing, how ADLINK Edge with AWS IoT Greengrass uses Intel architecture for ML at the edge, and tips for getting started. Hear from ADLINK Technology, the leader in edge computing, and Amazon Web Services (AWS), the leader in cloud computing, to learn how easy it is when they come together with Intel for vision-based manufacturing initiatives.
Topics: Edge IoT for vision ML basics and tips; the different ways vision ML is being used in manufacturing, including smart pallets, automated parts picking, anomaly detection and robotics; and how ADLINK Edge with AWS IoT Greengrass ML leverages Intel architecture, OpenVINO and Movidius Myriad X to accelerate AI vision at the edge.
While traditional sports events have successfully dominated a segment of the entertainment industry for decades, eSports has recently become a global phenomenon, with its global market revenue reaching $1.1 billion in 2019, marking 27% annual growth. By 2021, eSports is expected to have 84 million viewers in the US alone, higher than the 63 million NBA viewers. Watching eSports is no longer about the live experience alone. Technology with a human-centered design focus is transforming the gaming industry into an end-to-end experience, engaging audiences both online and offline every step of the way.
In this session, an immersive eSports arena solution with AIoT (Artificial Intelligence of Things) technology is presented with three focus areas:
Intel RFP Ready Kits are commercially hardened to make it easier and faster for enterprise developers to deploy and scale Edge, Machine Learning and AI solutions. More than 140 RFP Ready Kits and derivative Market Ready Solutions are available today, spanning industrial, retail, transportation, Smart Ag (smart agriculture) and energy segments. The goal of the program is to provide customer ready solutions (proven hardware + software, tools, SDKs, regional and global distribution) to accelerate deployment of advanced capabilities such as Computer Vision using Intel’s OpenVINO Toolkit. Chetan will describe the program and highlight solutions that are part of this year’s Global IoT DevFest – so get ready to jumpstart your deployments today.
Connectivity to all devices and intelligence at the edge brings us to the point where we are able to move forward to the design of autonomous robotics. Autonomous robotics could be widely used in research, retail, rescue, warehouse, exploration and service. However, there are complex puzzles in hardware and software to be put together. This involves vision, motor, navigation, simulation and more. How can we make it easier for developers and show a clear path from development to deployment? Join this session to learn more.
The technological development in the UAV sector has been enormous in recent years. The platforms and hardware/software for UAVs have become more abundant, offering more choices and enhanced performance. Taking this growth and the available facilities into consideration, we have attempted to use quadcopters to increase the efficiency of inventory monitoring in warehouses through a cost-effective approach.
We have learned that significant losses occur because of inefficient monitoring techniques, which mostly rely on human labor. Since manual verification is both time-consuming and error-prone, we used drones, data analytics, Android and IoT to simplify the process. This approach proved affordable, accurate and viable. Our drone flies inside a warehouse, tracks goods and components, and sends alerts/updates to the store manager through a web interface/Android application we developed. This way, we can track every box in the warehouse while eliminating the chance of it being lost.
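The reconciliation step at the heart of this workflow can be sketched as a set comparison between the warehouse manifest and the drone’s scan log. The box IDs below are hypothetical, and a real system would also track locations and scan timestamps.

```python
def reconcile(manifest, scanned):
    """Compare drone-scanned box IDs against the warehouse manifest.
    Returns (missing, unexpected): boxes on the manifest the drone never
    saw, and boxes it saw that no record accounts for."""
    manifest_ids = set(manifest)
    scanned_ids = set(scanned)
    missing = sorted(manifest_ids - scanned_ids)
    unexpected = sorted(scanned_ids - manifest_ids)
    return missing, unexpected

# Hypothetical manifest vs. one drone pass over the shelves.
manifest = ["BOX-001", "BOX-002", "BOX-003", "BOX-004"]
scanned = ["BOX-001", "BOX-003", "BOX-004", "BOX-099"]
missing, unexpected = reconcile(manifest, scanned)
print(missing)     # -> ['BOX-002']  triggers an alert to the store manager
print(unexpected)  # -> ['BOX-099']  a box with no matching record
```

Either non-empty list would drive the alerts/updates pushed to the web and Android interfaces described above.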
Computer vision and deep learning technologies are now widely used in multiple IoT segments. Intel has a complete portfolio for computer vision, from world-class silicon to software toolkits. The Intel® Distribution of OpenVINO™ Toolkit can help customers unleash the full potential of Intel platforms in computer vision solutions.
Full-Chain Human-Machine Dialogue Technologies: based on a large amount of acoustic and text data, we provide a comprehensive conversation service that is mainly task-oriented but also includes chat and Q&A.
High Customizability: we provide abundant built-in skills and support customized dialogue logic and content, as well as ultra-high customization of turn-level interaction, heuristic conversation and complex knowledge management.
The session will cover use cases, technologies and development tools to enable vision-based solutions aimed at improving well-being and safety for professionals.