Intel’s 2024 Rising Star Faculty Awards Recognize Technical Achievements by 8 Leading Researchers

Highlights:

  • The Intel® Rising Star Faculty Award (RSA) program acknowledges eight early-career academic researchers leading groundbreaking technology research and facilitates collaboration between award winners and leaders at Intel.

  • This year’s 2024 RSA winners are Mor Geva, Gushu Li, Dimitrios Skarlatos, Hari Subramoni, Caroline Trippel, Tiwei Wei, Tsui-Wei (Lily) Weng, and Mengjie Yu.

Each year, the Intel® Rising Star Faculty Award (RSA) program selects early-career academic researchers who are leading advancements in technology research with the potential to disrupt the industry. For 2024, eight award winners are being recognized for their novel work in computer science, electrical engineering, and computer engineering.

The program recognizes academic community members who are doing exceptional work in the field, with the hope of building long-term collaborative relationships between award recipients and senior technical leaders at Intel. In addition, recipients were chosen for their innovative teaching methods and their efforts to increase the participation of women and underrepresented minorities in science and engineering.

The selected faculty conducted research to find novel solutions to challenges spanning artificial intelligence, computer architecture, quantum computing, manufacturing process and packaging technology, security, and quantum photonics.

This year’s winners consist of faculty members from the following institutions:
 

  • Carnegie Mellon University
  • Purdue University
  • Stanford University
  • Tel Aviv University
  • The Ohio State University
  • University of California, San Diego
  • University of Southern California
  • University of Pennsylvania


The 2024 RSA Winners Are:

Intel’s Rising Star Faculty Award Winners for 2024 (from top row, left): Mengjie Yu and Mor Geva. From second row, left: Caroline Trippel, Dimitrios Skarlatos, and Gushu Li. From bottom row, left: Hari Subramoni, Tsui-Wei (Lily) Weng, and Tiwei Wei.

 

 

Mengjie Yu

Assistant Professor in the Department of Electrical and Computer Engineering
University of Southern California

At the University of Southern California, Mengjie Yu founded the Nanoscale Nonlinear and Quantum Photonics Lab to advance the understanding of nonlinear science at the nanoscale and to contribute to the realization of next-generation optoelectronic circuits for computing, optical communication, ranging, and metrology. Yu received a DARPA Young Faculty Award, which supports her development of ultrafast, broadband optical sources that enable information processing with enhanced parallelism, scalability, and precision. Her research aims to realize fully integrated optoelectronic circuits that unlock the full power of light, employing multiple functionalities, nonlinearities, and fast control mechanisms to meet the ever-increasing demand for capturing and processing classical and quantum information. Going forward, she will continue to explore novel fabrication techniques for hybrid photonic platforms and unconventional materials, engineering light-matter interactions at ultralow optical powers and enabling massive integration of optical and electronic devices on a single chip. Her work centers on creating highly efficient, scalable, and low-cost photonic devices that can be integrated with existing semiconductor technologies. Notably, her work on lithium niobate (LN) photonics has the potential to revolutionize optical communication systems by providing faster and more energy-efficient data transmission, with significant implications for data centers, telecommunication networks, and beyond.

 

Mor Geva

Senior Lecturer, School of Computer Science, Faculty of Exact Sciences
Tel Aviv University

At Tel Aviv University, Mor Geva has formed a new research group for Artificial Neuroscience that tackles fundamental problems in the interpretability, trustworthiness, factuality, and reasoning of large language models (LLMs). The group develops novel methods for inspecting the functionality of specific components in LLMs and for tracking the evolution of model predictions. Geva’s recent work with collaborators at Google Research introduced the Patchscopes framework, which leverages an LLM to translate its own hidden representations into natural language. The group also applies advanced interpretability methods to study internal mechanisms and knowledge structures in LLMs and to investigate their latent reasoning pathways. Some of the most pressing issues with LLMs, such as the generation of factually incorrect text and logically flawed reasoning, may be attributed to the way models represent and recall knowledge internally. Geva’s research shows that LLMs do not capture and utilize knowledge dependencies well, which limits both their latent reasoning abilities and our ability to update their knowledge. In research with collaborators at Google on evaluating the knowledge and reasoning performance of LLMs, she found that incorporating answer granularity into the evaluation reveals a significant knowledge evaluation gap, showing that current protocols underestimate LLMs’ knowledge.

 

Caroline Trippel

Assistant Professor in the Computer Science and Electrical Engineering Departments
Stanford University

Caroline Trippel’s research focuses on computer architecture, promoting correctness, security, and reliability as first-order design metrics. Her work leverages automated reasoning and formal methods to design and verify hardware systems. Modern hardware systems that combine shared-memory parallelism, hardware specialization, hardware/software heterogeneity, multi-tenancy, and hyperscale present a key challenge: How can we enforce high-assurance (correct, secure, and reliable) execution for software? Trippel’s research takes a four-pronged approach to this challenge: 1) designing hardware-software (HW-SW) contracts that precisely and succinctly expose hardware correctness, security, and reliability guarantees to software; 2) developing software analyses, parameterized by these HW-SW contracts, to automate the design of high-assurance software; 3) designing hardware verification methodologies that check hardware adherence to new and existing HW-SW contracts; and 4) co-designing approaches that improve hardware verification scalability and high-assurance software performance. Her research influenced the design of the RISC-V ISA’s memory consistency model (MCM). Her work has also uncovered bugs in commercial processors (Meltdown/Spectre attack variants), ISAs (bugs in the draft RISC-V MCM), high-level languages (a bug in the C11 MCM), and cryptography applications (Spectre gadgets in OpenSSL and Libsodium).

 

Dimitrios Skarlatos

Assistant Professor in the Computer Science Department
Carnegie Mellon University

Dimitrios Skarlatos bridges hardware and operating systems, delving into the core challenges of datacenter computing and addressing fundamental questions about scalability limitations, security vulnerabilities, and energy efficiency. His past work on memory management has tackled longstanding system design challenges at the interface of the OS and hardware that can severely impede server efficiency. His contributions at the algorithmic, OS, and hardware levels have enabled highly efficient virtual memory and memory management for large-scale systems, leading to major gains in production data centers. Skarlatos’ work further extends into security at the intersection of the OS and hardware. He has uncovered vulnerabilities in the software-hardware interface and has designed comprehensive hardware and OS mechanisms to reduce the attack surface of operating systems. His work has been upstreamed into Linux, targeting containerized environments, and was later adopted by Android. Looking ahead, Skarlatos is pioneering the design of OS and hardware extensions aimed at bridging the semantic gap in data-parallel hardware, such as GPUs. His approach shifts away from specialized runtimes and loosely integrated offload devices, all while ensuring robust security guarantees and maximized energy efficiency.

 

Gushu Li

Assistant Professor in the Department of Computer and Information Science and the Department of Electrical and Systems Engineering
University of Pennsylvania

Gushu Li’s research centers on quantum programming languages, compilers, and computer architecture. Li has a multifaceted research agenda. He develops high-level quantum programming language designs beyond quantum circuits, which will enable efficient and modular quantum application development at large scale. He focuses on quantum algorithm optimization for key application domains of quantum computing, such as quantum simulation for scientific discovery. He employs compiler optimization for quantum error correction codes on emerging quantum hardware architectures, which is essential for future fault-tolerant quantum computing. He also develops algorithms and architectures for scalable tuning and control of quantum processors, such as quantum dot devices. Recently, his group developed a SAT-based compilation framework that generates optimal fermion-to-qubit mappings, reducing overhead in the subsequent quantum Hamiltonian simulation. His group also developed compiler optimization algorithms for bosonic (continuous-variable) quantum computing, with new and creative decomposition algorithms for the linear interferometer. Additionally, his group collaborated on new algorithms and architectural support for quantum device control and tune-up, especially for silicon quantum dot processors.

 

Hari Subramoni

Assistant Professor in the Department of Computer Science and Engineering
The Ohio State University

Hari Subramoni’s research focuses on high-performance computing (HPC) systems for AI, programmable systems, and software for heterogeneous systems. By suitably deploying HPC solutions, his team significantly reduced the training time of convolutional neural networks, accelerating model training on very large histopathology images from multiple days to under 30 minutes and enabling expedient diagnosis with large whole-slide images. He created an end-to-end solution to democratize HPC-powered AI for agriculture and welding. Using distributed deep learning-enabled self-supervised learning (SSL) techniques, Subramoni accelerated the labeling of large multi-terabyte agricultural image datasets by more than 15x on 16 GPUs, reducing model training time from 7.8 hours to 31 minutes compared to state-of-the-art solutions. He accelerated the data preprocessing and model inference steps by applying novel HPC-enabled techniques along with model quantization and compression. His research aims at creating HPC middleware and supporting libraries for programmable systems, such as field-programmable gate arrays (FPGAs), for HPC and deep learning. Subramoni’s work on software for heterogeneous systems led him to spearhead the MVAPICH Message Passing Interface (MPI) library project; these libraries are used by more than 3,400 organizations in 92 countries. In addition, he has worked on creating unified high-performance communication middleware that serves both traditional HPC and emerging AI applications.

 

Tsui-Wei (Lily) Weng

Assistant Professor at the Halıcıoğlu Data Science Institute
University of California, San Diego

Tsui-Wei (Lily) Weng’s research is centered on trustworthy machine learning, with a focus on enhancing and ensuring the robustness and interpretability of modern deep learning systems. These systems are pivotal in various engineering and scientific fields, including computer vision, natural language processing, autonomous systems, healthcare, and data security. Weng has pioneered the field of robust machine learning by establishing theoretical and algorithmic foundations to assess and improve the robustness of deep neural networks (DNNs). Her notable achievements include developing fast, provable robustness certificates for DNNs, scalable solutions for robust learning, and safe approaches for reinforcement learning and control. In the realm of interpretable machine learning, her lab has led the way with innovations in algorithms for automated mechanistic interpretability across both vision and language domains. These algorithms are designed to elucidate DNN functionalities through human-friendly concepts. Her group has also developed automated and scalable techniques for learning interpretable DNN models, achieving the first efficient algorithm scalable to ImageNet without the need for collecting curated concept labels. Weng’s research is dedicated to shaping the next generation of AI and deep learning systems to be not only more capable but also more explainable, robust, and reliable – qualities essential for their deployment in safety-critical environments and for uncovering key failure cases and biases.

 

Tiwei Wei

Assistant Professor in the School of Mechanical Engineering
Purdue University

Tiwei Wei’s research focuses on solving fabrication challenges and heat transfer issues in advanced semiconductor packaging and assembly. He is developing new materials and techniques for scaling 3D interconnect density, including through-silicon vias (TSVs), through-glass vias (TGVs), Cu/Sn microbump bonding, and Cu/SiO₂ hybrid bonding. In addition, his lab is investigating the fundamental thermal, mechanical, and electrical behaviors of these scaled 3D metal interconnects. Supported by the Semiconductor Research Corporation (SRC), his team used Raman spectroscopy to report the resulting stresses in in-house-developed blind TSVs with diameters as small as 1 μm. His team has also explored integrating micro/nanoscale porous copper inverse opal (CIO) structures into fine-pitch Sn-based solder microbumps to enhance thermal conductivity and mechanical reliability in 3D semiconductor devices. Wei also introduced hotspot-targeted microjet cooling to address potential temperature non-uniformity across different functional blocks within processors. Additionally, Wei led a team that raised $1.8 million in funding from the DOE ARPA-E COOLERCHIPS program to develop chip/package-level confined two-phase microjet cooling, which utilizes porous-structure surface enhancement and phase separation. These innovative solutions are expected to pave the way for more efficient thermal management in high-density advanced packaging and 3D integration.