Intel® is pleased to present the Intel® Innovation conference, September 27-28, 2022, an event designed to inform and ignite the imagination of developers and other tech leaders. We are also proud to have Intel Labs lead our Innovation for the Future track.
This exciting track offers developers and tech experts an insider’s look at the next generation of hardware and software solutions—several of which promise greater access and affordability. Participants will be introduced to the latest advancements in quantum computing, AI, machine learning, and more. You'll also get a sneak peek at the latest multimedia and virtual reality technologies, assistive computing, and Wi-Fi respiration sensing, just to name a few.
With a total of nine researcher-led sessions and fifteen interactive demonstrations, this is a can’t-miss event for those who simply must be in the know about the latest developer tools and technology.
For more information about the event, visit the Intel® Innovation page.
The following is a complete list of Intel Labs’ sessions and demonstrations at Intel Innovation:
Intel® Quantum SDK Tutorial: Intro to Quantum Programming Using C++
Dr. Anne Matsuura, Senior Principal Engineer and Director of Quantum & Molecular Technologies, Intel Labs
While advances in qubit hardware draw mainstream attention in the quantum computing field, quantum researchers know that significant breakthroughs across the full hardware and software stack are needed to eventually attain quantum practicality.
To advance this journey, Intel has developed a full-stack Software Development Kit (SDK), called the Intel Quantum SDK, which includes an LLVM-based C++ compiler and system software workflow. This architecture lets developers write a single self-contained source file containing both the classical optimizer and the quantum program, which is compiled to a single binary that executes quantum algorithms on simulated qubit hardware.
In this hands-on interactive lab, members of Intel Labs’ Quantum Applications & Architecture team, led by Senior Principal Engineer Dr. Anne Matsuura, will introduce the Intel Quantum SDK and demonstrate its use, including the execution of popular quantum-classical chemistry algorithms on a full-stack simulator.
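To make the hybrid quantum-classical pattern concrete ahead of the lab, here is a minimal, SDK-independent Python sketch of the loop described above: a classical optimizer steering a simulated one-qubit quantum kernel toward its minimum-energy parameter setting. All names here are illustrative; this is not the Intel Quantum SDK's actual C++ API.

```python
import numpy as np
from scipy.optimize import minimize

# Pauli-Z observable for a single simulated qubit.
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ry(theta):
    """Single-qubit Ry rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def energy(params):
    """'Quantum' half: prepare |psi> = Ry(theta)|0> on a simulated
    qubit and measure the expectation value <psi|Z|psi>."""
    psi = ry(params[0]) @ np.array([1, 0], dtype=complex)
    return float(np.real(psi.conj() @ Z @ psi))

# 'Classical' half: a conventional optimizer drives the quantum
# kernel toward the minimum-energy setting (theta ~ pi, energy ~ -1).
result = minimize(energy, x0=[0.5], method="COBYLA")
print(f"theta = {result.x[0]:.3f}, energy = {result.fun:.3f}")
```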
Neuromorphic Computing: Solving Large Scale Problems with Loihi 2 and Lava
Tim Shea, Neuromorphic Research Scientist, Intel Labs
Danielle Rager, Neuromorphic Algorithms Research Scientist, Intel Labs
Most of the excitement surrounding neuromorphic computing is related to its potential use for autonomous systems and robots. However, emerging research from Intel Labs suggests its potential for solving large-scale problems for industry, including data center optimization, wireless infrastructure monitoring, and more.
Intel’s most recent architectural innovations make it easier than ever for developers to leverage neuromorphic technology for this type of problem-solving. In this session, we’ll introduce these innovations, including (1) Loihi 2, Intel’s second-generation neuromorphic research chip, (2) Kapoho Point, a stackable eight-chip system for building neuromorphic applications, and (3) Lava, an open-source software framework.
Tim Shea and Danielle Rager, researchers from Intel’s Neuromorphic Computing Lab, will demonstrate how to build neuromorphic applications to accelerate AI and optimization workloads. They will also show how to map a Lava application to Loihi 2 and solve nondeterministic polynomial (NP) problems.
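For a feel of the programming model, the sketch below, patterned after Lava's public tutorials, connects two populations of leaky integrate-and-fire (LIF) neurons through dense synapses and runs them on the CPU simulator. Module paths and parameter defaults may shift between Lava releases.

```python
import numpy as np
from lava.proc.lif.process import LIF
from lava.proc.dense.process import Dense
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg

# Two populations of leaky integrate-and-fire neurons.
pre = LIF(shape=(8,), bias_mant=4, vth=10)  # biased so it spikes
post = LIF(shape=(4,), vth=10)

# Dense all-to-all synapses from the 8 pre neurons to the 4 post neurons.
conn = Dense(weights=np.random.randint(0, 5, size=(4, 8)))

# Wire spike output -> synapses -> synaptic input.
pre.s_out.connect(conn.s_in)
conn.a_out.connect(post.a_in)

# Execute 100 timesteps on the CPU backend; the same network can target
# Loihi 2 hardware by swapping in a different run configuration.
post.run(condition=RunSteps(num_steps=100), run_cfg=Loihi1SimCfg())
print(post.v.get())  # read out membrane voltages
post.stop()
```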
Responsible AI: The Future of Security and Privacy in AI/ML
Jason Martin, Principal Engineer in Intel’s Security Solutions Lab
A thoughtful, end-to-end approach to security and privacy is a non-negotiable aspect of AI and machine learning. But what does that look like? And what is Intel doing to provide a more trustworthy future, while paving the way for pioneering research in healthcare, science, and business?
Keeping data secure is not just a matter of software encryption; hardware systems also play an important role. In this session, you will learn what Intel Labs is doing to protect the future of data, even as that data is shared for the common good.
Jason Martin, Principal Engineer in Intel’s Security Solutions Lab, will explain the potential vulnerabilities of machine learning and what Intel is doing to shore up security and privacy against future data breaches.
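One widely studied technique for learning from data that is shared for the common good without exposing it is federated learning, in which model updates, never raw data, leave each site. The toy federated-averaging round below is a generic illustration in plain numpy; the linear model and three hypothetical clients are not drawn from the session itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, client_data, lr=0.1):
    """Each client fits a linear model on its own private data and
    returns only updated weights; raw data never leaves the site."""
    X, y = client_data
    grad = X.T @ (X @ global_weights - y) / len(y)  # least-squares gradient
    return global_weights - lr * grad

# Three hypothetical institutions, each holding private (X, y) data.
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
w_global = np.zeros(3)

for _ in range(20):
    # Server broadcasts weights; clients train locally in isolation.
    local_weights = [local_update(w_global, data) for data in clients]
    # Federated averaging: the server aggregates updates only.
    w_global = np.mean(local_weights, axis=0)

print("global model after 20 rounds:", w_global)
```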
Next-Gen Edge Services and Private 4G/5G Network Deployment
Ravi Iyer, Intel Fellow and Director of Intel's Emerging Systems Lab
Enterprises are increasingly interested in establishing private networks to securely deploy edge services and improve their performance by lowering latency and jitter. Aether is an open source 4G/5G private network platform from the Open Networking Foundation (ONF) community that helps streamline the deployment of a private network and edge services.
In this session, we’ll first introduce Aether, which was designed to deliver scalable, distributed services in on-premises edge and hybrid edge/cloud infrastructures. We will then describe multiple next-generation edge capabilities and services that we have developed on Aether. Integrated with Intel® Smart Edge Open software, Aether fast-tracks deployments and simplifies the operation of single-edge or multi-edge private 4G/5G networks for enterprise applications (e.g., Industry 4.0). It also supports 5G features such as network slicing and associated QoS.
Ravi Iyer, Intel Fellow and Director of Intel's Emerging Systems Lab, will demonstrate how multiple next-generation edge services can be developed and deployed on Aether. These services include visual data management systems, anomaly detection, and interactive/immersive capabilities.
Hardware-Aware AutoML: Using AI to Optimize AI
Nilesh Jain, Principal Engineer, Intel Labs
The productivity bottlenecks associated with scaling AI and machine learning (ML) can stall the deployment of AI in production and make performance KPIs hard to meet. Mapping and optimizing AI models on emerging AI platforms with traditional manual techniques is painfully time-consuming and demands highly skilled labor.
In this discussion, we will introduce hardware-aware automated machine learning (AutoML) research that addresses AI scaling and efficiency challenges through automation. We will share our approaches to AI automation and optimization, which have the potential to improve productivity by 100x and improve performance by 2-3x. These include (1) AutoQ, for automated mixed-precision quantization of AI models, (2) BootstrapNAS, for automated design and discovery of platform-optimized AI models, and (3) AI QoS, for dynamic optimization of AI models using Intel® Resource Director Technology (Intel® RDT). We will also discuss future research directions for achieving deterministic AI performance.
Nilesh Jain, Principal Engineer at Intel Labs, will discuss how these emerging hardware-aware AutoML technologies will speed up machine learning productivity and performance while reducing reliance on deep learning experts.
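As a toy illustration of the hardware-aware search at the heart of approaches like AutoQ (the general idea only, not Intel's implementation), the sketch below exhaustively scores per-layer bit-width assignments against a made-up latency model and accuracy proxy, keeping the fastest configuration that clears an accuracy floor.

```python
import itertools

LAYERS = ["conv1", "conv2", "conv3", "fc"]
BITS = [4, 8, 16]  # candidate per-layer precisions

def estimated_latency(config):
    """Placeholder hardware cost model: lower precision runs faster.
    A real system would query measurements or a learned predictor."""
    return sum(bits * 0.4 for bits in config.values())

def estimated_accuracy(config):
    """Placeholder accuracy proxy: higher precision is more accurate."""
    return 0.70 + 0.02 * min(config.values()) + 0.001 * sum(config.values())

best = None
for combo in itertools.product(BITS, repeat=len(LAYERS)):
    config = dict(zip(LAYERS, combo))
    acc, lat = estimated_accuracy(config), estimated_latency(config)
    if acc >= 0.85 and (best is None or lat < best[1]):  # accuracy floor
        best = (config, lat, acc)

config, lat, acc = best
print(f"chosen mixed-precision config: {config}")
print(f"estimated latency {lat:.1f} ms at accuracy {acc:.3f}")
```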
Heterogeneous, Distributed Computing and the Future of Programming
Tim Mattson, Senior Principal Engineer at Intel Labs
The future is heterogeneous. A specialized processor doing work aligned to its architecture delivers the best performance/watt. The future is distributed. Workloads need distributed computing to meet computational and I/O throughput requirements. Hardware changes rapidly compared to the lifetime of software. High velocity, high variability, parallelism … what’s a programmer to do?
The key is one codebase for all processors. This is the goal of Intel’s oneAPI initiative. In this talk, we explore the complexity of future systems and the adaptations needed in our programming environments. We start with oneAPI and add abstractions for distributed computing. We then explore a more distant future and how automation is needed to ensure that general-purpose programmers thrive in a world of parallel distributed computing.
Tim Mattson, Senior Principal Engineer at Intel Labs and widely published author on the subject of parallel computing, will discuss the fast-moving world of computing and Intel’s work to support abstractions that help programmers keep up.
Cognitive AI: Architecting the Future of Machine Intelligence
Ted Willke, Senior Principal Engineer at Intel Labs and Director of Intel’s Brain-Inspired Computing Lab
Human-centric, cognitive AI is the future of machine learning. By 2025, machines are expected to make great advances in understanding language, integrating commonsense knowledge and reasoning, and autonomously adapting to new circumstances.
A key tenet of this evolution is multimodal cognition, the ability for machines to acquire knowledge from a variety of inputs, understand the world, and apply reasoning, thus mimicking how humans learn from their environment. Multimodal cognition will bring machines one step closer to human-level performance in a variety of real-world applications that demand deliberation.
Ted Willke, Senior Principal Engineer at Intel Labs and Director of Intel's Brain-Inspired Computing Lab, will discuss how cognitive AI research is advancing the third wave of AI by building systems that incorporate three levels of knowledge: procedural, conceptual, and retrieved external knowledge. You'll learn to envision AI as a collaborator with humans as we navigate the many challenges of life and work.
Accelerating AI Performance for Transformer Models on Intel Xeon Platforms
Moshe Wasserblat, Research Manager for Natural Language Processing at Intel Labs
Transformer models enable many exciting and creative applications, from voice- and text-driven website design to the generation of code, music, and art. However, deploying large transformer models in production without losing performance is challenging for most organizations.
Contrary to the assumption that large transformer models must be deployed on GPUs to achieve high accuracy in production, recent research shows that developers can achieve comparable results running transformer models on Intel® Xeon® Scalable processors. Intel has partnered with Hugging Face to develop the Optimum library, an open-source extension of the Hugging Face Transformers library that provides access to performance optimization tools for efficient training and inference when running transformer models on Intel Xeon CPUs.
Moshe Wasserblat, Research Manager for Natural Language Processing at Intel Labs, will introduce some of these optimization techniques, including sparsity, quantization, pruning, distillation, and more. He'll also discuss how Intel's Optimum library simplifies transformer deployment and how Intel Labs' partnerships with Deci.ai and Neural Magic are helping to deliver faster, more cost-effective use cases on Intel Xeon processors.
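As a generic illustration of one of these techniques, the snippet below applies PyTorch's built-in dynamic INT8 quantization to a Hugging Face transformer for CPU inference. It deliberately uses stock PyTorch and Transformers APIs rather than the Optimum library, whose interface differs.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Dynamic quantization: Linear-layer weights are stored as INT8 and
# activations are quantized on the fly, shrinking the model and speeding
# up CPU inference with little accuracy loss on many workloads.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

inputs = tokenizer("Xeon makes this snappy.", return_tensors="pt")
with torch.no_grad():
    logits = quantized(**inputs).logits
print(model.config.id2label[int(logits.argmax())])
```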
Beyond 5G: AI in Next Gen Wireless Networks
Nageen Himayat, Senior Principal Engineer, Security and Privacy Research, Intel Labs
As demand for advanced connected services and devices grows, wireless networks must evolve to support massive increases in throughput and density, deliver lower latency, and handle increasingly complex applications. Intel and the National Science Foundation (NSF) are currently exploring ways to address these challenges with AI/ML techniques.
The Intel-NSF partnership on “Machine Learning for Wireless Networking Systems” (MLWiNS) aims to accelerate research that leverages AI/ML techniques for the design and architecture of next-gen wireless networks and natively supports AI solutions and services over such systems. In this session, we’ll discuss some of the groundbreaking research that the MLWiNS program, with the help of 21 top academic institutions, has yielded to advance AI solutions in next-gen wireless systems.
Nageen Himayat, Senior Principal Engineer in Intel’s Security and Privacy Research Lab, and Shilpa Talwar, Intel Fellow and Director of Wireless Systems Research, will share some of the insights from MLWiNS and the impact this research is expected to have on developing next-gen network standards.
Demonstrations
ANYmal Robot: Reinforcement Learning for Navigation of Complex Terrain
Meet ANYmal, a quadrupedal robot designed for autonomous operation in challenging environments. ANYmal incorporates exteroceptive and proprioceptive perception to achieve robust navigation and has been successfully tested in various hazardous environments, including an hour-long hike in the Alps. It is the product of an Intel-funded, multi-year collaboration with ETH Zurich.
Respiration Detection Using Wi-Fi Sensing
Respiration monitoring provides valuable data but typically involves wearables and/or other hardware that make continuous or frequent monitoring impractical and expensive. In this demo, Intel Labs introduces an alternative: device-free respiration sensing via Wi-Fi, which uses a real-time respiration/presence detection algorithm to interpret disruptions in the wireless channel, such as those caused by breathing.
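The core signal-processing idea fits in a few lines: breathing modulates the wireless channel at roughly 0.1-0.5 Hz, so a dominant spectral peak in that band of the channel amplitude indicates presence and yields the respiration rate. The sketch below demonstrates the idea on synthetic data standing in for real channel state information (CSI); the sampling rate and signal model are assumptions.

```python
import numpy as np

fs = 10.0                     # assumed CSI sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)  # one minute of samples

# Synthetic CSI amplitude: a slow breathing modulation (~0.25 Hz, i.e.
# 15 breaths/min) buried in noise stands in for a real capture.
rng = np.random.default_rng(0)
csi_amplitude = (1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * t)
                 + 0.02 * rng.normal(size=t.size))

# Look for a dominant spectral peak in the human respiration band.
spectrum = np.abs(np.fft.rfft(csi_amplitude - csi_amplitude.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
band = (freqs >= 0.1) & (freqs <= 0.5)

peak = freqs[band][np.argmax(spectrum[band])]
print(f"estimated respiration rate: {peak * 60:.1f} breaths/min")
# A presence detector can additionally require the peak to stand well
# above the rest of the spectrum before trusting the estimate.
```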
Assistive Computing: AI Response Generation and Brain-Computer Interface
Intel is expanding the Assistive Context-Aware Toolkit (ACAT) to better support speechless and touchless computing. In the past, ACAT has addressed these challenges with a sophisticated UI, gaze tracking, and word-prediction modules; this work extends its capabilities with spoken-interaction support. In this demonstration, we'll introduce our latest assistive computing modules, including gaze tracking, word prediction, and pre-trained response generation.
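For a sense of what pre-trained response generation looks like at the API level, here is a minimal sketch using a generic open model through Hugging Face's pipeline API. The model choice and prompt are purely illustrative; they are not ACAT's actual components.

```python
from transformers import pipeline

# A generic pre-trained language model stands in for ACAT's
# response-generation module; the choice is illustrative only.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "Thank you for visiting. I would like to"
suggestions = generator(prompt, max_new_tokens=10, num_return_sequences=3,
                        do_sample=True, pad_token_id=50256)

# Offer candidate completions the user can select with gaze alone.
for i, s in enumerate(suggestions, 1):
    print(f"{i}. {s['generated_text']}")
```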
Immersive Collaboration with Visual Sensing and Mixed Reality
This augmented reality slideshow features an extraordinary line-up of new collaborative tools for virtual meetings and telegaming. Watch the presenter finger-write notes on a virtual whiteboard while 360-degree, real-time video captures every move inside a virtual environment. Experience the heightened reality of gesture-driven interaction, novel angle-of-arrival assisted voice detection, and more.
Enhancing Photorealism in 3D Simulations for Training Autonomous Machines
Photorealism elevates the quality of 3D simulators, gaming, and the metaverse. In this demo, Intel Labs showcases its new neural graphics enhancement technology on CARLA, a leading open-source simulator for autonomous driving research. Before-and-after images will be displayed on a large screen so audiences can see the remarkable results of this innovative technology.
AI for 3D Content Creation: Image Generation and Style Transfer in Blender
Visual effects artists can spend hours perfecting a single 3D model or artifact for movies, games, and virtual worlds. By using AI in Blender on an Intel platform, they can do the same work in a fraction of the time. This includes time spent sculpting models and applying generative styles to 3D objects. In this demonstration, speakers will transform an image into a 3D artifact using AI in Blender.
Cognitive AI: Applying Multimodal Understanding to Video Search
Cognitive AI opens a world of possibilities for multimodal understanding. Tomorrow's multimodal machines and supporting architectures will integrate neural networks and structured knowledge. They will be capable of symbolic reasoning, broad information extraction, and more. In this demo, the speaker will show how multimodal systems, optimized for Cognitive AI, can enable video understanding.
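One common building block for this kind of multimodal video search is a joint text-image embedding model such as CLIP: embed a text query and sampled video frames into one space, then rank frames by similarity. The sketch below shows that pattern with placeholder frames; a real pipeline would sample frames from video, and the system demonstrated at the event is not shown here.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder "frames"; a real system would sample these from video.
frames = [Image.new("RGB", (224, 224), c) for c in ("red", "green", "blue")]
query = "a red stop sign"

inputs = processor(text=[query], images=frames, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# Rank frames by similarity to the text query in the joint space.
scores = out.logits_per_text.squeeze(0)  # one score per frame
best = int(scores.argmax())
print(f"best-matching frame: #{best}, score {scores[best]:.2f}")
```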
Trusted Media: Real-time FakeCatcher for Deepfake Detection
The dramatic rise of deepfakes is diminishing trust in online media. Intel is working to counteract this trend with FakeCatcher, an AI-based tool for detecting fake media. In this demonstration, we will introduce FakeCatcher, now optimized for real-time operation and enhanced with Intel's latest deepfake detection algorithms. We'll show how it differentiates authentic media from fakes in real time on streaming video.
AI Robotics: Human-AI Collaboration Plus 3D Environment Mapping
Find out how Intel Labs is advancing the future role of robotics in low-volume and high-variability manufacturing processes. This live demonstration showcases novel sensor-driven algorithms that enable human-robot collaborative task execution. Audiences will also learn how Intel is using LiDAR and robot-based digitalization to improve 3D environmental mapping.
Future of Telepresence: Variable Viewpoint Capture, Rendering, and Streaming
Variable Viewpoint Video (VVV) heightens the multimedia experience by creating an almost holographic telepresence for users. The technology on display in this demo uses a multi-camera array and gaze tracking to provide a more realistic spatial experience for virtual conferencing, video production, and more. Best of all, users need only a high-end laptop to enjoy it, and video production requires only a VVV camera system. Audiences will see a fully functional, interactive proof of concept (PoC), complete with video capture devices, high-resolution cameras, and microphones.
Intel Quantum SDK and 3D Interactive Demo of Intel’s Quantum Hardware
This demo highlights a beta preview version of the Intel Quantum Software Development Kit (SDK), which lets users interface with Intel's Quantum Computing stack. The SDK includes an intuitive user interface based on C++, an LLVM-based compiler toolchain adapted for quantum, and a high-performance Intel Quantum Simulator (IQS) qubit target backend. The demo will also feature a 3D interactive look at Intel’s quantum hardware.
Neuromorphic Computing: New Development Board and Lava Software Framework
This demo introduces Intel’s latest neuromorphic innovation, Kapoho Point. This stackable system features eight Loihi 2 chips in a compact 4x4-inch form factor with an Ethernet interface. We will showcase the new development board and demonstrate a basic workload running on a functional neuromorphic system using the Lava software framework.
Security in AI: Defending AI Models from Adversarial Attacks
Safe deployment of ML-based cyber-physical systems requires robust protection against adversarial manipulation. In this demo, we'll show examples of adversarial inputs and how simulation toolkits developed under DARPA's Guaranteeing AI Robustness Against Deception (GARD) program can help developers evaluate the robustness of their models before deployment in safety- or security-critical environments.
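For a concrete sense of how adversarial inputs are crafted, below is the classic fast gradient sign method (FGSM) in PyTorch against a tiny stand-in classifier. Toolkits such as those developed under GARD automate this attack, and far stronger ones, against a user's own models; the model here is just a placeholder.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# A tiny stand-in classifier; in practice this is the model under test.
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(28 * 28, 10))
model.eval()

x = torch.rand(1, 1, 28, 28)  # stand-in input image
label = torch.tensor([3])     # its true class

# FGSM: nudge the input in the direction that increases the loss,
# bounded by epsilon so the change stays visually negligible.
x.requires_grad_(True)
loss = F.cross_entropy(model(x), label)
loss.backward()
x_adv = (x + 0.1 * x.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("clean prediction:      ", int(model(x).argmax()))
    print("adversarial prediction:", int(model(x_adv).argmax()))
```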
Next-Gen Edge Services and Private 4G/5G Network Deployment
This demonstration introduces next-gen edge services running on Aether, an open 5G private network optimized for AI. Audiences will be able to observe the real-time operation of the following Aether-connected edge services: (1) Visual Data Management Systems, (2) Probabilistic Anomaly Detection, (3) Cloud Gaming, and (4) Generic Multi-Access with heterogeneous wireless access networks.
Hardware-Aware AutoML: AI Automation for Model Optimization
This demonstration will introduce groundbreaking research that addresses the challenges of scaling AI, delivering compelling improvements in productivity and performance. We'll show how AutoQ automated mixed-precision quantization and BootstrapNAS hardware-aware model optimization can help accelerate deployments while reducing reliance on deep learning experts.