Intel's Rising Star Faculty Award program has selected 10 university faculty members who show great promise in developing future computing technologies. From a novel cloud system stack to ultra-low-power computing and memory platforms to artificially intelligent (AI) systems that learn on the fly, these researchers are building advanced technologies today.
The program supports faculty members who are early in their academic careers and who show great promise as future leaders in disruptive computing technologies. The program also fosters long-term collaborative relationships with senior technical leaders at Intel.
The awards were given based on progressive research in computer science, engineering, and social science in support of the global digital transition, in the following areas: software, security, interconnect, memory, architecture, and process.
Faculty members who work at the following universities received Rising Star awards: Cornell University, Georgia Tech, Stanford University, Technion, University of California at San Diego, University of Illinois at Urbana-Champaign, University of Michigan, University of Pennsylvania, University of Texas at Austin, and University of Washington.
Meet the 2020 Rising Star award recipients:
Christina Delimitrou
Assistant Professor of Electrical and Computer Engineering
Cornell University
As a doctoral candidate in Electrical Engineering at Stanford, Delimitrou developed cluster management systems that introduced a new data-driven approach to cloud management. Adopted by Twitter and AT&T, these systems improve the performance and resource efficiency of cloud systems by an order of magnitude. In her proposed work at Cornell, Delimitrou has identified the next big challenge for cloud computing: managing the increasing complexity of cloud applications and data center hardware in a practical, scalable way. Given the importance of responsiveness and performance predictability for modern cloud applications, she proposes to investigate a novel cloud system stack that leverages practical machine learning (ML) methods to jointly optimize data center hardware and software.
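To give a rough sense of what "data-driven" means here: cluster managers in this line of work famously framed scheduling as a recommendation problem, predicting how an application will perform on configurations it has never run on from a few profiled measurements. The sketch below is a minimal illustration of that general idea, not Delimitrou's actual system; the applications, configurations, and scores are all made up.

```python
import numpy as np

# Rows are applications, columns are server configurations, entries are
# measured performance scores; most entries are unknown (NaN) because
# each app was only profiled on a few configurations.
perf = np.array([
    [0.9,    np.nan, 0.3,    np.nan],
    [np.nan, 0.8,    np.nan, 0.4],
    [0.85,   0.75,   np.nan, np.nan],
    [np.nan, np.nan, 0.35,   0.45],
])

# Fill unknowns with column means, then use a low-rank SVD reconstruction
# (collaborative filtering) to predict the missing scores.
col_means = np.nanmean(perf, axis=0)
filled = np.where(np.isnan(perf), col_means, perf)
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
k = 2
pred = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Recommend the configuration with the best predicted score for app 0,
# considering only configurations it was never profiled on.
app = 0
unseen = np.isnan(perf[app])
best = int(np.argmax(np.where(unseen, pred[app], -np.inf)))
print(f"recommend configuration {best} for app {app}")
```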
Asif Khan
Assistant Professor of Electrical and Computer Engineering
Georgia Tech
Khan’s research conceptualizes and fabricates solid-state electronic devices that leverage novel physical phenomena and emerging materials (such as ferroelectrics, antiferroelectrics, and strongly correlated quantum materials). These devices play a key role in shaping the future of non-von Neumann computing and have the potential to overcome what are perceived as fundamental limits of computation. His work led to an innovative experimental proof-of-concept demonstration of negative capacitance, a novel physical phenomenon that can enable ultra-low-power computing and memory platforms by overcoming the fundamental Boltzmann limit of 60 mV/decade subthreshold swing in field-effect transistors.
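For context, the 60 mV/decade figure is the room-temperature thermionic limit on how sharply a conventional transistor can switch between off and on; a negative-capacitance gate stack effectively amplifies the gate voltage internally, which is how it can push below this bound:

```latex
% Boltzmann (thermionic) limit on the subthreshold swing of a FET:
% the gate voltage required per tenfold increase in drain current.
\[
  \mathrm{SS}
    = \frac{\partial V_{G}}{\partial \left(\log_{10} I_{D}\right)}
    \;\ge\; \ln(10)\,\frac{k_{B}T}{q}
    \;\approx\; 60\ \mathrm{mV/decade}
    \quad \text{at } T = 300\,\mathrm{K}.
\]
```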
Chelsea Finn
Assistant Professor of Computer Science and Electrical Engineering
Stanford University
Finn’s research focuses on enabling robots and other agents to develop broadly intelligent behavior through learning and interaction. Her work combines machine learning and robotic control, including end-to-end learning of visual perception and robotic manipulation skills, deep reinforcement learning of general skills from autonomously collected experience, and meta-learning algorithms that can enable fast learning of new concepts and behaviors. The goal of Finn’s work is to enable artificially intelligent systems to be generalists, capable of acquiring common sense from many diverse experiences, learning new skills on the fly, and adapting to changing situations. This would have a transformative effect on how and where AI and robots can be deployed in society.
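Finn is best known for model-agnostic meta-learning (MAML), which learns an initialization from which a model can adapt to a new task in just a few gradient steps. The sketch below shows that inner-loop/outer-loop structure in its simplest first-order form, on made-up scalar regression tasks; it is an illustration of the idea, not her implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, a, x):
    """Mean squared error of the linear model y_hat = w * x against a
    task's true function y = a * x, and its gradient with respect to w."""
    err = (w - a) * x
    return np.mean(err ** 2), np.mean(2 * err * x)

w = 5.0                    # meta-initialization (a single scalar here)
alpha, beta = 0.1, 0.01    # inner-loop / outer-loop learning rates

for step in range(3000):
    a = rng.uniform(-2.0, 2.0)          # sample a task: y = a * x
    x_support = rng.uniform(-1, 1, 20)  # data used to adapt to the task
    x_query = rng.uniform(-1, 1, 20)    # data used for the meta-update

    # Inner loop: one gradient step adapts the weights to this task.
    _, g = loss_and_grad(w, a, x_support)
    w_adapted = w - alpha * g

    # Outer loop (first-order MAML): update the meta-initialization
    # using the gradient evaluated at the adapted weights.
    _, g_post = loss_and_grad(w_adapted, a, x_query)
    w -= beta * g_post

# The learned initialization settles near the task mean (0 here), the
# point from which a single inner step adapts best on average.
print(f"meta-learned initialization: w = {w:.3f}")
```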
Daniel Soudry
Assistant Professor of Electrical Engineering
Technion, Israel Institute of Technology
Soudry’s contributions address the core challenge of making deep learning more efficient in terms of computational resources. Despite impressive progress, artificial neural nets still lag far behind biological neural nets in most areas: even the simplest fly is far more resourceful than the most advanced robot. Soudry’s novel approach relies on accurate models with low numerical precision. Decreasing the numerical precision of a neural network model is a simple and effective way to improve its resource efficiency, and nearly all recent deep-learning hardware relies heavily on lower-precision math. The benefits are a reduction in the memory required to store the network, a reduction in chip area, and a drastic improvement in energy efficiency.
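As a concrete illustration of why lower precision pays off, here is a minimal sketch of symmetric post-training int8 quantization, one common low-precision technique (not necessarily the specific method in Soudry's papers). Storing weights as 8-bit integers plus a single scale factor cuts memory four-fold versus float32 at a small approximation error:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization of float weights to int8.
    Returns the integer weights and the scale needed to recover them."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale   # dequantized approximation

# int8 storage is 4x smaller than float32, and the approximation error
# is small relative to the weights themselves.
rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"relative error: {rel_err:.4f}, memory: {w.nbytes} -> {q.nbytes} bytes")
```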
Nadia Polikarpova
Assistant Professor of Computer Science and Engineering
University of California, San Diego
Polikarpova’s research focuses on program synthesis, which automates low-level aspects of programming. The technology has the potential to enable the next “quantum leap” in software construction, akin to the transition from low-level assembly language to the high-level programming languages of today, eliminating whole classes of software errors and security vulnerabilities. Her work explores synthesizing programs from types. Types offer a promising solution to the challenges of specification and scale: they are popular with programmers, they can vary in expressiveness and capture both functional and non-functional properties, and their compositional nature can help guide the search for candidate programs.
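A deliberately tiny sketch of the type-directed idea follows: enumerate compositions of library components, but only try a component when its result type matches the current goal, so types prune the search space. The component library here is invented for illustration; real synthesizers such as her Synquid tool work from far richer refinement types.

```python
from itertools import product

# Hypothetical component library: name -> (argument types, result type).
components = {
    "length": (("list",), "int"),
    "double": (("int",),  "int"),
    "head":   (("list",), "int"),
    "repeat": (("int",),  "list"),
}

def enumerate_terms(goal_type, depth, inputs):
    """Yield string terms of the goal type, built from typed inputs and
    components, up to a given composition depth."""
    for name, typ in inputs.items():
        if typ == goal_type:
            yield name
    if depth == 0:
        return
    for comp, (arg_types, result) in components.items():
        if result != goal_type:
            continue  # type-based pruning: result must match the goal
        for args in product(*(enumerate_terms(t, depth - 1, inputs)
                              for t in arg_types)):
            yield f"{comp}({', '.join(args)})"

# Goal: synthesize terms of type "int" from a single input xs : list.
for term in enumerate_terms("int", depth=2, inputs={"xs": "list"}):
    print(term)   # e.g., length(xs), head(xs), double(length(xs)), ...
```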
Bo Li
Assistant Professor in Computer Science
University of Illinois at Urbana-Champaign
Li explores the vulnerabilities of machine learning systems to various adversarial attacks and develops robust learning systems. Her research aims to mitigate uncertainty and improve safety guarantees for a wide range of industry applications that adopt machine learning techniques. Her work focuses on three areas: (1) identifying learning vulnerabilities and certifying the robustness of machine learning models, (2) improving the robustness of general learning systems with theoretical guarantees, and (3) developing general large-scale machine learning pipelines that provide provable robustness and privacy. Li’s upcoming projects include robust representation learning, on-chip robust machine learning, in-database robust learning, and robust learning pipelines enhanced with simulational inference. Her work has led to a new way of thinking about how to train machine learning systems with both high accuracy and robustness guarantees.
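One minimal instance of the kind of vulnerability this work studies is the fast gradient sign method (FGSM): perturb an input in the direction that increases the model's loss, within a small L-infinity budget. The sketch below applies FGSM to a toy logistic-regression classifier; the weights and input are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny "trained" linear classifier: p(y=1 | x) = sigmoid(w . x + b).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

x = np.array([0.2, -0.4, 1.0])   # a correctly classified input, label y=1
y = 1

# FGSM: step in the sign of the loss gradient with respect to the input.
# For logistic loss, dL/dx = (sigmoid(w.x + b) - y) * w.
grad_x = (sigmoid(w @ x + b) - y) * w
epsilon = 0.3
x_adv = x + epsilon * np.sign(grad_x)

# The small perturbation noticeably degrades the model's confidence.
print(f"clean       p(y=1) = {sigmoid(w @ x + b):.3f}")
print(f"adversarial p(y=1) = {sigmoid(w @ x_adv + b):.3f}")
```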
Baris Kasikci
Assistant Professor of Electrical Engineering and Computer Science
University of Michigan
Kasikci has collaborated with Intel on many of his projects, spanning heterogeneous architectures, persistent memory reliability, bug reproduction and diagnosis, defenses against speculative execution attacks, and profile-guided instruction prefetching for data center applications. To improve reliability and security, he plans to use sophisticated program analyses to recover information about executions that lead to failures in production systems. Kasikci also plans to build tools that ease the development of applications that directly use persistent memory, leveraging program analysis to automatically explore the state space of those applications. To improve performance, he is working on a novel profile-driven approach to prefetching instructions.
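As a toy sketch of the profile-guided flavor of that last idea (not Kasikci's actual system), suppose a production profile attributes instruction-cache misses to caller/callee code regions; a compiler pass could then greedily pick a limited number of prefetch injection sites to cover the most misses:

```python
# Made-up profile: (caller_region, missed_region) -> observed miss count.
miss_profile = {
    ("dispatch", "parse_request"):      9_200,
    ("parse_request", "decode_header"): 7_400,
    ("dispatch", "log_slow_path"):        310,
    ("decode_header", "alloc_buf"):     5_100,
}

budget = 2  # how many prefetch sites we are willing to inject

# Greedy selection: prefetch the hottest caller->callee edges first,
# since a prefetch placed in the caller can hide the callee's miss.
chosen = sorted(miss_profile, key=miss_profile.get, reverse=True)[:budget]

covered = sum(miss_profile[edge] for edge in chosen)
total = sum(miss_profile.values())
for caller, callee in chosen:
    print(f"inject prefetch of {callee} at end of {caller}")
print(f"covers {covered}/{total} profiled misses")
```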
Hamed Hassani
Assistant Professor of Electrical and Systems Engineering
University of Pennsylvania
Hassani’s work, focused on machine learning, information theory, and discrete optimization, explores how to design intelligent and adaptive systems that can cope with complex and uncertain environments. The emergence of the Internet of Things (IoT) and autonomous systems has led to a new direction in coding called non-asymptotic coding theory. Working with Intel, Hassani has focused on designing error-correcting codes of the smallest possible length that still achieve high rates and high reliability. He recently developed a new paradigm called model-based robust deep learning, a framework that encompasses a much broader class of robust training methodologies than the state of the art. For a variety of natural variations inherent to computer vision applications (such as changes in lighting, weather conditions, and background), he has shown that deep learning models can be made robust enough to retain significantly better accuracy in the presence of natural data variations.
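To get a feel for what a short, reliable block code looks like, here is the classic textbook Hamming(7,4) code, which packs 4 data bits into a 7-bit codeword and corrects any single bit flip. It illustrates short-blocklength coding in general and is not one of the codes from Hassani's research:

```python
import numpy as np

G = np.array([[1,0,0,0,1,1,0],   # generator matrix (systematic form)
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],   # parity-check matrix
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

data = np.array([1, 0, 1, 1])
codeword = data @ G % 2          # 4 data bits -> 7-bit codeword

received = codeword.copy()
received[2] ^= 1                 # the channel flips one bit

# A nonzero syndrome equals the column of H at the error position.
syndrome = H @ received % 2
err_pos = int(np.argmax((H.T == syndrome).all(axis=1)))
corrected = received.copy()
corrected[err_pos] ^= 1

assert (corrected == codeword).all()
print("corrected single-bit error at position", err_pos)
```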
Jaydeep Kulkarni
Assistant Professor of Electrical and Computer Engineering
University of Texas at Austin
Kulkarni founded the Circuit Research Lab at UT Austin, where his group addresses many aspects of integrated circuit design, including energy-efficient circuits, machine learning hardware accelerators, in-memory and neuromorphic computing, emerging nano-devices, heterogeneous and 3D integrated circuits, hardware security, and cryogenic computing. Kulkarni's mission for his research lab is to develop transformative research ideas in devices and circuits. Through ongoing collaborations with Intel's advanced memory design teams, his research envisions high-performance ML accelerator development that leverages emerging embedded memory technologies. This research could shape the next generation of energy-efficient ML accelerators targeted at cloud inference and training workloads.
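The core primitive behind many in-memory ML accelerators is analog matrix-vector multiplication: weights are stored as conductances in a memory crossbar, inputs arrive as voltages, and Ohm's and Kirchhoff's laws sum the products as currents. The sketch below is a back-of-the-envelope numerical model of that idea with made-up values, not a model of Kulkarni's designs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Crossbar with 4 output lines and 8 inputs: each output line collects
# current I[j] = sum_i G[j, i] * V[i], i.e., a dot product "for free".
G = rng.uniform(0.0, 1.0, size=(4, 8))   # conductances (stored weights)
V = rng.uniform(0.0, 0.2, size=8)        # input voltages (activations)

I_ideal = G @ V                          # ideal analog dot products

# Real devices are noisy; model conductance variation and re-read.
G_noisy = G * (1 + rng.normal(0, 0.05, size=G.shape))
I_noisy = G_noisy @ V

print("ideal :", np.round(I_ideal, 3))
print("noisy :", np.round(I_noisy, 3))
```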
Hannaneh Hajishirzi
Assistant Professor of Computer Science and Engineering
University of Washington
Hajishirzi builds leading-edge AI systems that can automatically collect information in multiple modalities, such as text, images, and diagrams, and use this knowledge to answer questions about them. Her plan over the next five years is to develop AI systems that represent, comprehend, and reason about diverse forms of data at large scale. Toward this end, she will focus on three innovative research efforts addressing foundational problems in AI and natural language processing: (1) representation: integrating neural and symbolic representations to encode diverse forms of data into knowledge-aware embeddings; (2) reasoning: enabling interpretable, efficient reasoning across diverse data sources; and (3) scalable real-world applications: designing efficient, scalable deep neural networks that can be deployed on low-resource devices.