Watching A Replay
To access a replay session, click “View Archive” next to the desired session below. To quickly locate sessions of interest, use the Filter by Date or Track options, which let you sort sessions by conference/event track.
Thanks to a rich app ecosystem, mobile OSes such as Android* have spread to non-smartphone usages. For example, Android is employed for interactive devices in connected cars, supermarkets and electronic classrooms. These real-world use cases currently require the installation of multiple or even many standalone mobile devices to provide an uncompromised experience for every user. Meanwhile, computing capacity grows quickly even on relatively low-cost computing platforms. To reduce the total cost of ownership, it’s natural to ask: Is it possible to host multiple simultaneously interactive physical mobile clients on a single shared computing platform?
This session introduces AIC, a system that supports multiple simultaneously interactive physical Android clients with an uncompromised user experience on a single computing device. AIC offers every client a full mobile OS instance, virtualized using container-like OS virtualization technology. Every client features fully accelerated 3D graphics and standalone I/O peripherals such as touch, audio, and display.
Take a detailed look at our AIC prototype that supports multiple simultaneously interactive Android clients on one low-cost x86 platform. AIC may be the first system to enable a full Android experience on multiple interactive clients running on one computing platform. It’s also worth noting that AIC obeys a noninvasive design principle: It keeps its extensions within the bounds of the hardware abstraction layer, with no modifications to the kernel and only minimal changes to the mobile OS framework. This makes AIC extremely easy to maintain against any quickly evolving upstream kernel and Android source tree. See performance evaluation results that demonstrate AIC’s superiority to alternative VM-based solutions.
This session features a deep dive into Project Celadon, an open-source Android* software stack for Intel® architecture. Learn how Celadon integrates Google’s Neural Networks API and the Intel® Movidius™ Neural Compute Stick (NCS). And, watch as we demonstrate how you can accelerate Deep Neural Networks inferences via the NN API using NCS.
This session looks at ways to resolve the pain points that affect express delivery terminal technology. We’ll evaluate the market size of intelligent self-express service machines in China, present a profile of Jiangsu Cloudbox Networks Co., Ltd., and demonstrate the value and future of intelligent self-express service machines.
This session provides an overview of the latest code optimizations possible with the tools and libraries in Intel® System Studio. Use them to help speed development of system and IoT device apps.
You will learn:
• Intel® C++ Compiler optimizations that take advantage of SIMD features in the latest hardware
• OpenMP threading + SIMD
• How to use Intel® Threading Building Blocks (Intel® TBB) for more advanced task based threading
• Intel® Integrated Performance Primitives (Intel® IPP) for signal, image, media, and encryption
• Intel® Math Kernel Library (Intel® MKL) for linear algebra, FFTs, vector math, and statistics
• Intel® Data Analytics Acceleration Library (Intel® DAAL)
• Intel® Distribution of Python*
By the end of this decade, the pace of AI advancement will continue to accelerate as we approach general intelligence and the beginnings of true autonomy.
This session traces the evolution of edge AI and the frameworks that support it, and identifies new developments that are transforming the field. Topics include the Deep Learning inference engine, the OpenVINO® toolkit, and object detection with Intel-optimized TensorFlow.
This session provides a brief introduction to ReadSense technologies and industry solutions. And, we look at the collaboration between ReadSense and Intel through its use of Intel® Movidius™ technology.
In this session, learn how the Internet of Things is creating a lasting impact as you explore use cases both current and future. Throughout your waking day and into the night, the IoT is adding value at work, in the home, across your community and beyond. Products, services and processes are continuously being optimized through a world of connected technologies.
Human lives are priceless. Through advancements in military technologies, many services become automated – but still, human soldiers are needed to fight and protect borders in dangerous territories. Even with night vision devices, troops are limited in their ability to detect infiltration at long range. Drones are conspicuous and attract attention.
This session proposes a bold new approach: an army of intelligent miniature surveillance devices that can provide location, detect human presence, and report back to base for an immediate trigger. Such devices are capable of communicating with each other using low-power wireless technology and can run for years on a battery. They are capable of identifying movements and sounds (e.g., gunfire) using pattern matching, and reporting to the base station. This approach could dramatically improve accuracy and reduce the number of bombs fired – a cost-effective tactic that can save precious human lives.
Intel has recently entered the Computer Vision and Deep Learning domain with its OpenVINO® solution. It’s important to understand the performance aspects and other productization challenges of using Convolutional Neural Networks (CNNs) on Intel® technology-based hardware, and how OpenVINO solves them.
In this session, you’ll learn how CNNs are executed on target environments as varied as low-power, always-on embedded platforms and full-blown servers. Deploying CNNs is challenging, as the end target environment typically looks very different from the training environment. Training is typically done in high-end data centers, using popular frameworks such as Caffe* and TensorFlow. Scoring (inference), in contrast, can take place on embedded devices or accelerators such as FPGAs.
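As an illustration of the gap between training and inference environments, here is a toy sketch, in plain Python with no dependencies, of symmetric int8 post-training quantization, one common technique for shrinking a trained CNN so it fits low-power targets. The function names and the simple max-abs scaling rule are illustrative assumptions; real toolchains such as OpenVINO use far more elaborate calibration.

```python
# Toy sketch of symmetric int8 post-training quantization: map floating-
# point weights into the range [-127, 127] with a single scale factor.
# Illustrative only - not how any specific toolchain implements it.

def quantize_int8(weights):
    # Scale so the largest-magnitude weight maps to +/-127.
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate floating-point weights for accuracy checks.
    return [x * scale for x in q]

w = [0.5, -1.27, 0.02, 1.0]
q, s = quantize_int8(w)
print(q)                   # small integers in [-127, 127]
print(dequantize(q, s))    # values close to the original weights
```

The appeal at the edge is that int8 arithmetic needs a quarter of the memory bandwidth of float32 and maps onto SIMD and accelerator hardware, usually with only a small accuracy loss.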
The Open IoT Service Platform (OISP) is an open-source IoT platform for Cloud Service Providers (CSP), System Integrators (SI) and Independent Software Vendors (ISV). OISP provides a complete end-to-end infrastructure for IoT connectivity while maintaining interoperability at the edge, enabling CSPs, SIs and ISVs to join forces to create a compelling solution for the IoT industry.
This session – presented jointly by Intel and 1&1, the leading German CSP – offers an overview of 1&1's Container as a Service platform and how it integrates OISP. You’ll take a deep dive into OISP technology and learn how to deploy it, connecting a device with OISP using the Intel® IoT Developer Kit. And, we’ll describe the open source project structure and how developers can get started by joining the project.
OISP includes services and protocols for collecting data from IoT devices, performs functions such as triggering events based on rules, manages Big Data storage, and is easily extensible with common analytics platforms and other third-party services. It is also tightly integrated with the Intel IoT Developer Kit.
In recent months, Blockchain and other distributed ledger technologies have been greatly over-hyped. Yet, they show much potential beyond crypto values and finance.
In this session, learn about a real-world use case for provenance tracking using Blockchain and IoT technologies. See how a traditional production and handcraft industry can boost its value using a state-of-the-art solution linked with business process improvements.
Today’s edge computing technologies are designed to operate in very low-power environments with little connectivity. However, if AI algorithms, which typically require very high compute resources, can be designed and optimized to run at the edge in a low-power environment, this creates numerous possibilities for AI-powered IoT applications at the edge.
In this session, we discuss the possibilities for AI applications and use cases ranging from smart homes and smart factories to smart vehicles and driver assist. In addition, we look at real-world cases of AI/ML enabled through an AI co-processor/accelerator and neural network/Deep Learning algorithms.
Problems with AI at the edge can involve object detection from video, speech/voice recognition, or analyzing input from vibration sensors in machinery, to name a few examples. While some use cases may need compute infrastructure on cloud, others can be suitable for AI at the edge. These can be in surveillance/compliance or a host of other possibilities in smart home/factory/city scenarios.
We examine real-world use cases involving object detection using CNNs on an AI engine in an FPGA, and detail the challenges of implementing solutions that operate within power/efficiency and latency constraints and the FPGA footprint, with no significant loss in accuracy.
Deploying a new application to edge devices is hard when you are not on the same physical network. We now have a simple visual tool to deploy containers to your edge devices.
The next step will be to quickly analyze simulated or real sensor data directly on your servers wherever they are, in the cloud or in the field.
Get an up-close look at Intel® System Debugger, a component tool within Intel® System Studio. In this session, we demonstrate the available probes for debugging, including target connection assistant, crashlog viewer, debug system trace, and WinDbg* extensions.
Would you like to build hybrid solutions for the IoT? Do you want to leverage the power of the cloud locally? Or run Microsoft Azure* services and custom applications directly on cross-platform IoT devices?
In this fast, deep-dive session, you’ll learn how to install and run artificial intelligence applications even in disconnected situations, then manage everything centrally from the cloud using Microsoft services and security. And all it takes is Azure IoT Edge … and 45 minutes of your time!
Over the past 10 years we’ve watched cloud computing come of age as more and more companies send their data to the cloud for processing, storage, and management rather than keeping that data on a local server or edge gateway. The benefits of cloud computing are vast, but there’s another key development on the horizon as the Internet of Things matures: edge computing.
This session details the need for data from the intelligent edge, shares real-world industrial use cases for making use of data at the edge, and communicates the benefits of a proper edge computing framework.
While only 10 percent of enterprise-generated data is currently processed at the edge, Gartner predicts that figure will reach 50 percent by 2022. Half of all processing power is expected to gradually shift from the cloud to edge devices, leading to IoT projects that use the power of both cloud computing and the intelligent edge to make smart business decisions.
Use cases for edge computing range from simple manufacturing projects to larger smart city or extended macro-level projects. The fundamental goal for all such projects is to collect data from a large body of industrial assets, and then put that data to use immediately. Some of the devices are ready for IoT, while others were never designed for IoT projects. The common thread is the challenge of gathering data from those disparate devices, then processing, analyzing and acting on the data right at the Edge of the network or device.
Communication is key for the Internet of Things. Meeting industrial IoT (IIoT) needs – for example, operations technology (OT) installations like motion control – places strict requirements on network determinism. Proprietary industrial fieldbuses that meet those requirements have existed for many years, but we lacked an open network stack that allowed IT/OT convergence and interoperability between different vendor devices. IEEE is addressing this need through its “Time Sensitive Networking” (TSN) working group within the IEEE 802.1 Ethernet standard.
What is commonly termed “TSN” is the hardware implementation of a subset of the growing specifications within IEEE 802.1 TSN, allowing data-intensive and latency-critical traffic to share the same network. For example, IT and OT communications may share the same network and still meet the strict needs for OT traffic determinism.
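To make the shared-network idea concrete, the sketch below models an IEEE 802.1Qbv-style gate control list: time is divided into a repeating cycle of windows, and each window opens the transmission gate for a set of traffic classes, so latency-critical OT traffic gets guaranteed slots while IT traffic uses the rest. The data layout and function name are simplifications for illustration, not a switch or driver API.

```python
# Illustrative model of an IEEE 802.1Qbv gate control list: a repeating
# cycle of (duration_us, open_traffic_classes) windows. Given a time t
# (in microseconds), return which traffic classes may transmit.
def open_classes(gate_control_list, t):
    cycle = sum(duration for duration, _ in gate_control_list)
    t = t % cycle                       # the schedule repeats every cycle
    elapsed = 0
    for duration, classes in gate_control_list:
        if elapsed <= t < elapsed + duration:
            return classes
        elapsed += duration

# 1 ms cycle: first 250 us reserved for time-critical class 7 (OT traffic),
# the remaining 750 us open to best-effort classes 0-6 (IT traffic).
gcl = [(250, {7}), (750, {0, 1, 2, 3, 4, 5, 6})]
print(open_classes(gcl, 100))    # inside the reserved window: {7}
print(open_classes(gcl, 1300))   # 1300 % 1000 = 300: best-effort classes
```

Because the cycle is synchronized across all bridges (via IEEE 802.1AS time sync), the reserved window gives deterministic latency regardless of how much best-effort traffic shares the wire.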
This session explores the TSN basics and dives into implementation details, utilizing Intel® hardware such as the widely adopted Intel® I210 Ethernet Controller and Intel® FPGA options. We will introduce TTTech’s TSN solution for Intel FPGA, including Slate XNS, a powerful software tool.
Lightweight machine-to-machine (LWM2M) is an innovative device management protocol designed for secure, efficient management of resource-constrained devices on a variety of sensor-based IoT networks. LWM2M promises up to a 70% reduction in data usage, using flexible device management objects for advanced remote management with minimal implementation overhead.
This session provides a technical overview of LWM2M, covering the protocol structure and key features. It compares LWM2M with other protocols such as MQTT, explains its benefits for device management in IoT networks, and examines how LWM2M can be used for application data beyond device management. Finally, it discusses cloud-to-cloud interfaces for creating integrated solutions and offers recommendations to consider when designing and/or deploying LWM2M.
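As a concrete taste of the protocol structure, LWM2M organizes everything a device exposes into a tree addressed as /Object/Instance/Resource; for example, /3/0/0 is the Manufacturer resource of instance 0 of the standard Device object (ID 3 in the OMA registry). The tiny registry below is an illustrative sketch of that addressing scheme only, not an LWM2M client implementation (a real client speaks CoAP and handles registration, observe, and security).

```python
# Sketch of LWM2M's /Object/Instance/Resource addressing model.
# Object ID 3 (Device) and its resource 0 (Manufacturer) come from the
# OMA LWM2M registry; the class and method names are illustrative.

class Lwm2mTree:
    def __init__(self):
        # (object_id, instance_id, resource_id) -> value
        self._store = {}

    def _parse(self, path):
        obj, inst, res = (int(p) for p in path.strip("/").split("/"))
        return obj, inst, res

    def write(self, path, value):
        self._store[self._parse(path)] = value

    def read(self, path):
        return self._store[self._parse(path)]

tree = Lwm2mTree()
tree.write("/3/0/0", "Acme Sensors")   # Device object, Manufacturer
print(tree.read("/3/0/0"))
```

This uniform numeric addressing is part of why LWM2M stays so compact on the wire: both sides already know what every path means, so no per-message schema or topic metadata is needed.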
Connectivity is the core of the Internet of Things, yet numerous connectivity protocols exist for different IoT domains. MQTT stands out for low-bandwidth, high-latency networks and decoupled applications where systems don't require precise timing, and it simplifies IoT application integration across systems.
This session details the advantages of MQTT and guides you to available software libraries, security applications, real-world use cases, open source applications, and server creation using Docker*.
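One detail that makes MQTT's decoupling work is topic-based routing with wildcards: a subscriber's filter may use '+' to match exactly one topic level and '#' to match all remaining levels. The function below is an illustrative re-implementation of that matching rule in plain Python; production client libraries such as Eclipse Paho provide this for you.

```python
# Illustrative MQTT topic-filter matching, per the MQTT specification:
# '+' matches exactly one topic level, '#' (which must be the last
# level of a filter) matches the current level and everything below it.
def topic_matches(filter_str, topic):
    f_parts = filter_str.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True            # matches all remaining levels
        if i >= len(t_parts):
            return False           # filter is deeper than the topic
        if f != "+" and f != t_parts[i]:
            return False           # literal level mismatch
    return len(f_parts) == len(t_parts)

print(topic_matches("sensors/+/temperature", "sensors/kitchen/temperature"))
print(topic_matches("sensors/#", "sensors/kitchen/humidity"))
print(topic_matches("sensors/+", "sensors/kitchen/humidity"))
```

Because routing is driven entirely by these filters at the broker, publishers and subscribers never need to know about each other, which is exactly the decoupling the session describes.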
Great efforts are being made from all corners of the world to "connect everything" as part of the IoT revolution. This creates great opportunity, but it also creates great risk. The obvious concern is security, but it goes beyond that. How do you manage the flood of sensor data? How do you aggregate disparate streams of information, whether video or simple CLI responses from a remote device? How do you prioritize low-latency requirements correctly? To cache or not to cache - that is the question. The list goes on and on.
Until recently, most of these network appliances tended to reside in secure, temperature-controlled locations. But as the 5G/IoT network expands into new places - the factory floor, outdoor transportation hubs, oil rigs, trains, farms, wind turbines, etc. - the need for hardened networking products also grows. Join me as we review the devices that will monitor, secure, aggregate and set policy on the packets traversing the IoT. Presented by Lanner Electronics - the leader in the design and manufacturing of advanced network appliances and rugged industrial systems.
Intelligent machines are here, rapidly changing the way our society works, including the way we design, build, manage and inhabit the built environment. Huge advances in robotic equipment, construction materials and manufacturing techniques are completely changing the way construction work is carried out, putting the construction industry at the forefront of the Fourth Industrial Revolution. This session explores the technological advances in some core construction activities: concrete 3D printing, robotic bricklaying, robotic welding, and UAV site surveying. In addition, we’ll show how applications of machine learning, augmented and virtual reality and the integration of advanced sensors are disrupting the industry.
FPGAs play a critical role in heterogeneous compute platforms as flexible, reprogrammable, multi-function accelerators. They enable custom-hardware performance with the programmability of software. The industry trend toward software-defined hardware challenges not just the traditional architectures—compute, memory, network resources—but also the programming model of heterogeneous compute platforms. Traditionally, the FPGA programming model has been narrowly tailored and hardware-centric. As FPGAs become part of heterogeneous compute platforms and users expect the hardware to be “software-defined,” FPGAs must be accessible not just by hardware developers but also by software developers, which requires the programming model of FPGAs to evolve dramatically.
This session details a highly evolved, software-centric programming model that enables software developers to harness FPGAs through a comprehensive solutions stack. It encompasses FPGA-optimized libraries, compilers, tools, frameworks, SDK integration and an FPGA-enabled ecosystem. Your training will also include real-world examples using machine learning inference acceleration on FPGAs.
This keynote session is your special introduction to UP AI Edge, the first ultra-compact embedded platform for Artificial Intelligence on the edge powered by the Intel Atom® processor E3900 series (formerly known as Apollo Lake), Intel® Movidius™ Myriad™ 2 technology, and Intel® Cyclone® 10GX device. Witness the benefits you can derive with adoption, the performance you can achieve, and the path to production from lab to field deployment.
The growth of high-velocity, real-time digital, audio and video streaming data presents new challenges to industrial analytics. Traditional procedural programming isn’t well suited to rapidly changing conditions, multi-stream correlations and inferences. FogHorn’s complex event processor uses a “reactive expression” approach that links the flow of program execution within the analytic to the streaming data available to it.
Our language, Vel, provides a syntax that can describe reactions to events in streaming data in a simple and logical fashion. This simplicity reduces the amount of coding required for streaming edge analytics, which also improves its maintainability. The complex event processor, in turn, is tightly integrated with data consumption, publications, and machine learning modules that complete the Foghorn edge computing platform.
The reactive approach enables a host of new benefits for real-world applications such as predictive maintenance, condition monitoring, yield optimization, and anomaly detection.
This session provides insight into:
• Reactive expressions as applied to unbounded streaming data
• The advantages of reactive expressions over procedural programming in industrial edge analytics
• Real-world use cases detailing where the technology is being applied today
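The ideas above can be made concrete with a minimal reactive-style pipeline in plain Python: each arriving sample re-evaluates a derived expression (here, a moving average) and fires a reaction when a threshold is crossed. This is purely illustrative and is not FogHorn's Vel language, which expresses such logic declaratively; the function name, window size, and threshold are assumptions for the example.

```python
# Minimal reactive-style stream processing: the derived expression (a
# moving average over the last `window` samples) is re-evaluated on
# every incoming event, and a reaction fires when it crosses a threshold.
from collections import deque

def reactive_mean_alert(stream, window=3, threshold=50.0):
    buf = deque(maxlen=window)
    alerts = []
    for sample in stream:
        buf.append(sample)                 # a new event arrives
        mean = sum(buf) / len(buf)         # expression re-evaluates
        if mean > threshold:
            alerts.append((sample, mean))  # reaction fires
    return alerts

# e.g. vibration-sensor readings drifting into an anomalous range:
vibration = [10, 20, 30, 80, 90, 100]
print(reactive_mean_alert(vibration))   # alerts fire on the last samples
```

Note how the control flow is driven by the data rather than by an explicit polling loop in application code, which is the property that keeps such edge analytics short and maintainable.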
Building out a distributed system infrastructure in today’s emerging Industrial Internet of Things (IIoT) landscape means selecting from a sizable list of connectivity and protocol standards. If you’re a developer or system architect, you know many tools and protocols are available to move data around in your distributed application – not to mention the possibility of building out your own custom solution directly on TCP or UDP sockets.
This session details the work done by the Industrial Internet* Consortium (IIC), which recently published its Industrial Internet* Connectivity Framework document. The document introduces a connectivity stack model, details the requirements for connectivity frameworks in an IIoT system, assesses various standards against those requirements, and suggests four core connectivity standards. We provide a useful summary of this work and offer actionable guidance for making informed choices among the four standards, based on your system requirements.
The Context Sensing SDK for IoT is a Node.js, Go, C#, Java and Python-based framework supporting the collection, storage, sharing, analysis and use of sensor information. Designed to simplify the work of developers, system integrators, and prototyping teams, the framework provides an expandable plugin system for physical and virtual sensors, local and fog sync mechanisms, and general-purpose analysis modules. The code base is designed to scale from Intel Atom® processor-class devices to Intel® Xeon® processor-class devices.
In this session, two practical use cases are discussed.
IoTAR: Combining data from IoT devices with the visualization capabilities of Augmented Reality unleashes new possibilities for developers in the Retail, Automotive and Transportation, Industrial Automation and Energy sectors. The IoTAR project shows how devices connected through the SDK can be visualized and controlled through leading AR devices in today's market, giving rich user interfaces in an augmented world.
Adaptive Learning: Traditional education systems apply one-size-fits-all learning strategies and cannot be easily modified to meet individual student needs. Technology can help provide personalized learning experiences. Adaptive Learning detects students’ emotional states (satisfied, bored and confused) and behavioral engagement (on-task, off-task) and fuses them to determine each student's overall engagement. Teachers can react and modify their lessons based on real-time engagement data provided via a dashboard.
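The fusion step described for Adaptive Learning can be sketched as a simple weighted combination of the two signals. The state labels come from the session text; the numeric scores, weights, and function name are illustrative assumptions, not the SDK's actual fusion module.

```python
# Toy sketch of fusing an emotional state with behavioral engagement
# into a single overall-engagement score in [0, 1]. The scoring tables
# and weights are illustrative assumptions for the example.
EMOTION_SCORE = {"satisfied": 1.0, "confused": 0.5, "bored": 0.0}
BEHAVIOR_SCORE = {"on-task": 1.0, "off-task": 0.0}

def overall_engagement(emotion, behavior, w_emotion=0.4, w_behavior=0.6):
    # Weighted sum: behavior is weighted slightly higher than emotion here,
    # an arbitrary choice a real system would learn or tune.
    return (w_emotion * EMOTION_SCORE[emotion]
            + w_behavior * BEHAVIOR_SCORE[behavior])

print(overall_engagement("satisfied", "on-task"))   # fully engaged
print(overall_engagement("confused", "on-task"))    # partially engaged
print(overall_engagement("bored", "off-task"))      # disengaged
```

A dashboard would then aggregate these per-student scores over time so a teacher can spot disengagement as it happens.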
The IoT in 2018 is beyond the hype curve and we’re seeing a variety of market segments derive business value. As these systems scale in the next few years, a fundamental change in the core plumbing will be required that will dramatically change how IoT devices are designed, deployed and maintained through their lifecycle.
This session demonstrates that understanding and implementing concepts of workload consolidation is key to scaling IoT applications. Workload consolidation has been loosely associated with various cloud technologies and is sometimes bundled under the umbrella of fog computing.
We’ll cover key concepts of workload consolidation, as well as the business drivers and special challenges of implementing solutions at the edge. You’ll come away with a fair understanding of terms, components involved in workload consolidation, and how you should start planning your devices and applications to be "consolidation-friendly." This session is targeted at business executives, system architects and IoT software application engineers who wonder if “workload consolidation” is just another over-hyped phrase.
Security is of critical importance in IoT, and it only grows more critical in Industrial IoT and digital manufacturing, all spanning a diverse landscape: OT networks of PLC, machines and more, to IT networks and business systems, to the cloud, and often including sensor networks running over LPWAN or GSM networks. This session gives you a detailed understanding of SAP's approach to securing such complex landscapes.
Learn to integrate voice-enabled commands with Amazon Alexa* over a map using HERE location services built using AWS Serverless Architecture. This session gives you the ability to create your own unique microservices and then connect them with AWS IoT.
Video analytics is a key element in security cameras. Various computer vision algorithms such as pedestrian detection, facial detection/recognition, object tracking, and object detection are necessary ingredients. With recent advancements in deep learning technology, this is one of the industry’s hottest fields.
This deep-dive training session explains how we have developed and optimized video analytics algorithms for security cameras. Also learn our process for developing an algorithm, which involves more than just training models using Caffe* or TensorFlow – it should start with understanding use cases that affect the nature of dataset to collect, and should be tightly bound with hardware platform to get the best performance.
Transportation plays a significant role in global air pollution, contributing up to 80% of NOx emissions. Policy enforcement is costly and mostly ineffective. The effectiveness of current emission controls cannot be validated, because on-road NOx levels run higher than those measured in stationary tests. Less pollution could reduce healthcare costs, reduce crime, and improve educational performance. The challenge: how to incentivize consumer and commercial adoption?
This session offers a solution: SensorComm’s IoT-based mobile pollution monitoring system (Wi-NOx*) provides real-time measurement and monitoring of NOx emissions. Systems are installed in tailpipes and operate independently of each vehicle. Wi-NOx can identify specific polluters to reduce emissions, create alternative revenue sources for municipalities, and form the cornerstone of a global pollution mitigation strategy.
One of the biggest barriers to scaling an IoT solution occurs before you even power on your device. This session underscores the importance of device management in an IoT deployment and how it can help you future-proof your solution. Learn how you can overcome this and a host of other challenges.
OpenCL is a well-known standard for programming across CPUs, GPUs and other accelerators. A growing need for OpenCL in IoT is driven by the increased heterogeneity of the processing elements involved. Modern IoT solutions employ a rich mix of CPUs, GPUs, FPGAs and programmable ASICs, making OpenCL an attractive choice.
This session demonstrates the applicability of OpenCL in IoT, focusing on developer tools from Intel. We detail how the Intel® SDK for OpenCL Applications provides a complete toolset for building, debugging and analyzing OpenCL kernel code across Intel® CPUs and GPUs.
Next-generation IoT workloads, such as machine learning and image recognition, are demanding more computation resources and challenging the way a cloud is designed today. There is an emerging trend to use hardware accelerators, such as GPUs and FPGAs, to accelerate specific applications, including networking, security and machine learning. Data centers and cloud providers are investing in these efficient hardware accelerator solutions and allowing users to access these optimized resources on demand. Given the range of users, from the hardware-savvy to newcomers, enabling easy access to specialized accelerators while getting the best performance in a cloud environment can be a challenging task.
This presentation will discuss recent developments for hardware accelerator solutions in the open source community, with a focus on FPGAs and Smart NICs based on FPGAs.
We will briefly introduce FPGAs and delineate the cloud usage models for FPGAs. We will then do a deep dive into cloud orchestration frameworks (OpenStack and Kubernetes) and discuss how these frameworks handle devices and resources (for example, OpenStack Nova and Cyborg, Kubernetes Device Plugin). We will identify the challenges and gaps in both orchestration frameworks, which need to be addressed in the future.
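The device-handling pattern mentioned above can be illustrated with the extended-resource convention behind the Kubernetes Device Plugin mechanism: a plugin advertises a countable vendor resource named `vendor-domain/resource`, and pods then request it alongside CPU and memory. Below, the pod spec is shown as a Python dict and `fits()` is a toy scheduler check, not Kubernetes code; the resource name follows the real naming convention, but the container image is hypothetical.

```python
# Sketch of exposing an accelerator as a countable extended resource,
# the pattern used by the Kubernetes Device Plugin framework. The
# "intel.com/fpga" name follows the vendor-domain/resource convention;
# the fits() helper is an illustrative stand-in for the scheduler.
pod_spec = {
    "containers": [{
        "name": "inference",
        "image": "example/fpga-inference:latest",    # hypothetical image
        "resources": {"limits": {"intel.com/fpga": 1, "cpu": 2}},
    }]
}

def fits(pod, node_allocatable):
    """Toy check: does the node advertise every requested resource?"""
    for container in pod["containers"]:
        for res, qty in container["resources"]["limits"].items():
            if node_allocatable.get(res, 0) < qty:
                return False
    return True

print(fits(pod_spec, {"cpu": 8, "intel.com/fpga": 2}))  # node has an FPGA
print(fits(pod_spec, {"cpu": 8}))                        # no FPGA advertised
```

The point of the abstraction is that users never name a device file or PCI address; they ask for `intel.com/fpga: 1`, and the orchestrator pairs the pod with a node whose device plugin has advertised that resource.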