Graphics Numerical Hardware & System-Level Design
Mission
The Graphics Numerical Hardware and System-Level Design (NSD) group works on all aspects of numerical hardware design and high- and higher-level synthesis.
Our transdisciplinary work spans machine-readable specifications, C++ modeling, high-level synthesis, automatic and manual register transfer level (RTL) creation, and optimization with formal verification applied throughout.
We operate as an applied research group: everything we do is research driven and affects real products. We act as an internal consultancy, working with various teams on multiple concurrent projects within the division and across the company: architecture, microarchitecture, modeling, design, verification, and validation.
Our projects may begin as manual hardware modeling, creation, or optimization, but these innovations invariably get generalized, automated, disseminated, published in academia and industry, and patented. This has led to the emergence of four pillars on which the group rests:
- Consultancy: Manage multiple concurrent, production-facing projects.
- Tools: Create and refine research tools to improve various aspects of our hardware design processes. Mature concepts are then productized in collaboration with EDA vendors, with IP licensed as required. Academic tools are also used in collaboration with universities.
- Verticals: End-to-end, domain-specific tool requirements (such as datapath component libraries, hashing, and homomorphic encryption) emerge from our consultancy projects and tool development.
- Research Program: University collaborations and an intern program drive our innovation pipeline. We have active collaborations with Imperial College London, the University of Washington, the University of Utah, Rutgers University, Cornell University, and the University of California, Los Angeles (UCLA). We drive the most important applied research by having members of our group simultaneously conduct PhD research, providing continuous knowledge transfer and influencing academia and industry.
NSD was founded on the premise that deep mathematical and logical reasoning, and research in general, can deliver significant hardware benefits in reliability, speed, area, power efficiency, and implementation. As internal consultants, we work to extract the true underlying requirements, phrase the right design problem, explore the design space, and fully formally prove the correctness of experimental designs. Pursuing quality means that challenging legacy decisions comes with the territory; innovation and insight are prized, and every bit matters.
A full-flow vision has emerged from our transdisciplinary work, establishing a path to automation from machine-readable numerical hardware specifications all the way through to optimized hardware. This vision won the Best Paper Award for front-end design at the 60th Design Automation Conference (DAC 2023).
Research Library
Previous Talks
High Schools
Bay Area Mathematical Adventures
Silicon arithmetic combines mathematics, hardware, and computing to make the fastest, smallest, and most energy-efficient circuits that power the increasing range of devices we rely on every day.
October 19, 2021
Applied Research
HiPEDS EPSRC Centre for Doctoral Training in High-Performance Embedded and Distributed Systems
University of California San Diego Jacobs Undergraduate Mentoring Program & IEEE Society
Walking the Line of Applied Research
Applied research strikes the perfect balance between research and development in creating fundamental innovation that actually gets used. This talk offers insight into the technical, human, and managerial skills required to do this hard and useful work.
May 24 and December 15, 2022
Imperial College London Electrical & Electronic Engineering Society
Industrial-Sponsored PhDs and Industrial Research
December 1, 2022
Webinars & Conferences
Formal Validation of a Datapath Pipelined Design with VC Formal* Datapath Validation (DPV) Webinar
Formal verification offers risk elimination and can uncover bugs outright. Tools such as DPV can be used far beyond their normal use case to add value in the correctness, understanding, and even performance of your designs. This talk presents the value and advances of DPV use within Intel graphics.
February 1 and 8, 2023
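DPV itself is a Synopsys formal tool, but the property it establishes is easy to illustrate. The sketch below is not DPV: it brute-force checks, over a small hypothetical bit-width, that a shift-and-add multiplier implementation matches a multiply-accumulate specification — the kind of specification-versus-implementation equivalence that datapath validation proves exhaustively at full width.

```python
# Illustrative only: DPV proves datapath equivalence formally; for a
# narrow bit-width the same property can be checked by brute force.
WIDTH = 4
MASK = (1 << WIDTH) - 1

def reference(a, b, c):
    # Specification: multiply-accumulate, truncated to WIDTH bits.
    return (a * b + c) & MASK

def optimized(a, b, c):
    # Hypothetical rewritten datapath: shift-and-add multiplier
    # feeding the accumulator.
    acc = c
    for i in range(WIDTH):
        if (b >> i) & 1:
            acc += a << i
    return acc & MASK

# Exhaustive check over all 2^(3*WIDTH) input combinations.
assert all(
    reference(a, b, c) == optimized(a, b, c)
    for a in range(1 << WIDTH)
    for b in range(1 << WIDTH)
    for c in range(1 << WIDTH)
)
print("equivalent on all", 1 << (3 * WIDTH), "inputs")
```

Brute force stops scaling at around 30 total input bits; formal tools prove the same property symbolically at any width.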
28th Asia and South Pacific Design Automation Conference
Automatically Generate a Complete Polynomial Interpolation Design Space for Hardware Architectures
January 16-19, 2023
Directions in Numerical Hardware Design Methodology
Competitive GPU hardware design requires optimizations at an algorithm, number format, precision, accuracy, and logic gate level. This presentation discusses the progress towards a fully automated tool chain that takes machine-readable numerical specifications through algorithm exploration, precision, and accuracy tuning, and behavioral RTL creation and optimization with formal verification and validation used throughout.
January 19, 2023
Group Overview
Numerical Hardware Group Background & Internships
Imperial College London, Electrical and Electronic Engineering Society
January 27, 2022
Carnegie Mellon University, Electrical and Electronic Engineering
GPUs: The Datapath Goldrush
September 9, 2022
September 15, 2021
September 23, 2020
Carnegie Mellon University Math Club
GPU Datapath: Where Math, Hardware & Software Meet
February 19, 2020
February 24, 2021
Carnegie Mellon University, Electrical and Electronic Engineering, Chemistry Department
GPUs: The Datapath Goldrush
October 8, 2019
University of California, San Diego, Theta Tau Professional Development
Math & GPUs
January 22, 2020
University of California, Davis
GPUs: The Datapath Goldrush
October 17, 2019
Georgia Tech
GPUs: The Datapath Goldrush
September 12, 2019
Sacramento State University
GPUs: The Datapath Goldrush
October 6, 2022
Formal Verification & Specifications
Chinese Academy of Sciences, Institute of Computing Technology
Formal Verification in Digital Design: Power, Promise, and Pitfalls
Over the last decade, formal verification has become truly mainstream as a standard part of design verification methodologies. Formal verification offers the tantalizing promise of exhaustive, bulletproof proofs of correctness. This talk covers the fundamental obstacles to its use: feasibility and scalability, specifications and phrasing, and culture change.
October 25, 2022
Formal Methods in Computer-Aided Design 2022
Small Proofs from Congruence Closure
October 20, 2022
17th International Workshop on the ACL2 Theorem Prover and Its Applications
Formal Verification Challenges for the GPU Numerical Algorithm
May 26, 2022
What Every 21st Century Computer Scientist Should Know About Floating-Point Arithmetic
July 6, 2022
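Two of the classic surprises such a floating-point talk typically covers can be reproduced in a few lines: decimal literals are not exact in binary floating point, and addition is not associative, so summation order changes results.

```python
import math

# Neither 0.1, 0.2, nor 0.3 is exactly representable in binary
# floating point, and the two sides round differently.
print(0.1 + 0.2 == 0.3)   # False

# A small term is absorbed when added to a large one, so naive
# left-to-right summation loses it entirely.
vals = [1e16, 1.0, -1e16]
print(sum(vals))          # 0.0: the 1.0 is lost to rounding
print(math.fsum(vals))    # 1.0: error-free summation recovers it
```

Results like these are exactly what IEEE 754 rounding rules predict; they are surprising only if floating-point numbers are mistaken for real numbers.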
Learn about efficient propagation of metadata across e-graphs.
June 14, 2022
Methodology
Programming Languages, Analysis, and Verification Get-Together
E-graphs for Exploration: All Implementations are Equal But Some are More Equal Than Others
E-graphs have been around since the 1970s but in recent years they've taken aim at exploration and optimization problems. Providing a compact representation of equivalent designs and a complete history of the design space explored, they've proved useful in a range of domains. Researchers use e-graphs to automate numerical stability analysis, datapath hardware design, rewrite rule synthesis and much more.
December 6, 2022
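The compact sharing the abstract describes can be sketched in miniature. The toy below (an illustration, not the research tooling) combines hashconsing with a union-find: after the rewrite x*2 == x<<1 is recorded, any expression built on top of either form lands in the same equivalence class automatically. It handles only unions that precede dependent insertions, since a real e-graph's rebuilding step is omitted.

```python
class EGraph:
    """Toy e-graph: hashconsing plus union-find, no rebuilding."""

    def __init__(self):
        self.parent = []   # union-find over e-class ids
        self.table = {}    # hashcons: (op, canonical child ids) -> e-class id

    def find(self, i):
        # Find the canonical id, with path compression.
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i

    def add(self, op, *children):
        key = (op, tuple(self.find(c) for c in children))
        if key in self.table:              # structurally shared node
            return self.find(self.table[key])
        i = len(self.parent)
        self.parent.append(i)
        self.table[key] = i
        return i

    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a != b:
            self.parent[a] = b

eg = EGraph()
x, two, one = eg.add("x"), eg.add("2"), eg.add("1")
mul = eg.add("*", x, two)     # x * 2
shl = eg.add("<<", x, one)    # x << 1
eg.union(mul, shl)            # record the rewrite x*2 == x<<1
y = eg.add("y")
s1 = eg.add("+", mul, y)      # (x*2) + y
s2 = eg.add("+", shl, y)      # (x<<1) + y
assert eg.find(s1) == eg.find(s2)   # equal by congruence, stored once
```

Both sums hashcons to the same node because their children canonicalize to the same e-class — the sharing that lets an e-graph hold an exponential design space in linear storage.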
Imperial College London, Electrical & Electronic Engineering Society
Should Hardware Design Be So Much Harder than Software Design?
November 15, 2022
Imperial College London, Electrical & Electronic Engineering, Circuits and Systems Group
Directions in Numerical Hardware Design Methodology
Competitive GPU hardware design requires optimizations at an algorithm, number format, precision, accuracy, and logic gate level. We present progress toward a fully automated tool chain that takes machine-readable numerical specifications through algorithm exploration, precision, and accuracy tuning and optimization. The approach uses formal verification and validation throughout.
November 11, 2022
Imperial College London, Electrical & Electronic Engineering, Circuits and Systems Group
High-Level Synthesis in High-Performance Graphics Hardware
High-level synthesis elevates architectural and behavioral description to higher levels of abstraction while automating the design space exploration. Its application to high-performance graphics hardware provides an ecosystem where performance, power, and area are being driven to new extremes. The assumptions, theories, and capabilities are challenging the status quo and being challenged themselves.
November 9, 2022
29th IEEE Symposium on Computer Arithmetic
Automatic Datapath Optimization Using E-graphs
Novel Architecture and Novel Design Automation (NANDA)
ROVER: RTL Optimization via Verified E-graph Rewriting
Manually rewriting RTL to improve hardware performance can be broken down into a sequence of transformations. Can this process be automated? E-graphs offer a rewriting approach that maintains the full history of design space exploration and enables formal verification.
September 5 and 12, 2022
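One such transformation can be shown end to end. The sketch below applies a single factorization rewrite by hand — not ROVER itself — trading two multipliers for one, and then checks bit-exact equivalence over a small word length; modular (wraparound) arithmetic is what makes the rewrite sound despite the truncated intermediate sum.

```python
# A datapath rewrite in miniature: a*b + a*c  ->  a*(b+c),
# verified bit-exactly at a small illustrative width.
WIDTH = 5
MASK = (1 << WIDTH) - 1

def original(a, b, c):
    return (a * b + a * c) & MASK          # two multipliers, one adder

def rewritten(a, b, c):
    return (a * ((b + c) & MASK)) & MASK   # one multiplier, one adder

# The rewrite holds modulo 2^WIDTH even though (b+c) wraps around,
# because a * ((b+c) mod 2^w) is congruent to a*(b+c) mod 2^w.
assert all(
    original(a, b, c) == rewritten(a, b, c)
    for a in range(1 << WIDTH)
    for b in range(1 << WIDTH)
    for c in range(1 << WIDTH)
)
```

The point of the e-graph approach is that rewrites like this need not be applied one at a time in a fixed order: all of them coexist in the graph, and extraction picks the cheapest design afterward.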
Automatic Generation of Complete Polynomial Interpolation Hardware Design Space
Piecewise polynomial approximation is a standard technique for implementing complex functions. This talk describes the complete design space for these implementations and how it enables unique optimizations.
July 6, 2022
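The basic hardware scheme behind this talk can be sketched as follows: split the input interval into 2^k segments indexed by the top k bits, store low-degree coefficients per segment in a table, and evaluate with a handful of multiplies and adds. This toy uses the simplest possible choice — degree-1 polynomials fit through the segment endpoints for a hypothetical target function, exp on [0, 1) — whereas real designs select degree, segmentation, and coefficients to minimize worst-case error.

```python
import math

K = 6                      # 2^6 = 64 segments
N = 1 << K
f = math.exp               # hypothetical target function on [0, 1)

# Per-segment linear fit through the segment endpoints.
table = []
for i in range(N):
    x0, x1 = i / N, (i + 1) / N
    slope = (f(x1) - f(x0)) * N
    table.append((f(x0), slope))

def approx(x):
    i = int(x * N)                   # segment index: top K bits of x
    c0, c1 = table[i]
    return c0 + c1 * (x - i / N)     # degree-1 polynomial evaluation

# Worst-case error on a dense grid; for endpoint-fit linear segments
# this is bounded by max|f''| / (8 * N^2), about 1e-4 here.
err = max(abs(approx(x / 4096) - f(x / 4096)) for x in range(4096))
print(f"max error over [0,1): {err:.2e}")
```

Doubling the table (K = 7) quarters the error of a linear scheme — the accuracy/area trade-off that makes the complete design space worth enumerating.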
Applications Driven Architectures 2022 Annual Symposium
FPBench: Specifying the Behavior of Numerical Programs, Principled Multiprecision Hardware Micro-Architecting
The quality of numerical hardware implementations hinges on defining and exploring the design space. FPCore offers a way to specify hardware operations and various number formats, precisions, and accuracies, enabling efficient specification and exploration.
May 12, 2022
Automatic Datapath Optimization Using E-Graphs
Manually rewriting RTL to improve hardware performance can be broken down into a sequence of transformations. Can this be automated? E-graphs offer a rewriting approach that maintains the full history of design space exploration and enables formal verification.
May 5, 2022
University of Utah, School of Computing
The Numerical Hardware Design Landscape—Challenges and Opportunities
How can a robust design methodology be established that optimizes numerical hardware at an algorithm, number format, precision, accuracy, and logic-gate level?
April 7, 2022
Implementing, Optimizing, Verifying, and Validating Mathematical Hardware
The challenges of optimizing numerical hardware at an algorithm, number format, precision, accuracy, and logic gate level and their formal verification and validation challenges.
July 14, 2021
Imperial College London, Electrical and Electronic Engineering, Circuits & Systems Group
On the Nature of Manual RTL Optimizations
What are the types of optimizations performed by expert numerical hardware designers?
February 2, 2021
The Team
Dr. Theo Drane
Theo started working for the datapath consultancy Arithmatica in 2002 after completing a mathematics degree at the University of Cambridge in the UK. He leads the applied research Graphics Numerical Hardware Group within the Intel® Graphics Group.
Theo's patents have been used and licensed by Mentor Graphics*, Synopsys*, and Cadence*. His hobbies include writing short stories, composing, and traveling to Madeira.
Christopher Poole
Christopher pursued a bachelor's degree in mathematical science (concentrating in applied and computational mathematics) at Carnegie Mellon University in Pittsburgh, Pennsylvania. His studies included algebraic structures, numerical methods, and machine learning. Christopher participated in research regarding the effects of self-driving cars on highway traffic and interned as a data analyst at Emcor Facility Services. His hobbies include rugby, gaming, and backgammon.
Sam Coward
Sam completed a bachelor's degree in mathematics and a master's degree in scientific computing at the University of Cambridge in the UK. Throughout his studies, Sam took an interest in statistics, group theory, quantum mechanics, and computing. This led to internships that tackled formal verification at Cadence Design Systems and Riverlane (a quantum computing startup), and participation in a design optimization project at Intel. After graduating, he spent time at Nokia* developing firmware for network processors. He is now studying for a PhD in the Department of Electrical and Electronic Engineering at the Imperial College London. This involves close collaboration with the Graphics Numerical Hardware Group within the Intel Graphics Group. Outside of work, he is a keen squash player and, being London based, also enjoys a good theater trip.
Dr. Bill Zorn
Bill earned his PhD at the University of Washington in 2021 by conducting research near the intersection of programming languages and computer architecture. His focus is on finite-precision number systems, finding ways to make them more transparent to programmers and more amenable to efficient hardware designs. In addition to his work at Intel, he is also an organizer for the open source FPBench project. In his spare time, he enjoys hiking (preferably with dogs) and playing games from the late 90s on unnecessarily overclocked gaming rigs.
2023 Interns
Jordan Schmerge
Jordan is originally from Colorado and went to the Colorado School of Mines. He is interested in formal verification, program equivalence, and theorem proving and is looking forward to exploring validation on graphics floating-point hardware and algorithms. In his spare time, Jordan is an avid reader and puzzler.
Brett Saiki
Brett is an undergraduate student at the University of Washington, double majoring in computer engineering and mathematics. He is a member of the Programming Languages and Software Engineering (PLSE) research lab and works on projects involving computer number systems and term-rewriting techniques. In his free time, he enjoys running, reading, and listening to music.
2022 Interns
Bryan Tan
Bryan Tan is a second-year undergraduate studying for a master of engineering (MEng) degree in electronic and information engineering at Imperial College London. His university projects include a numerical circuit simulator implemented in C++, a fully tested MIPS CPU and a complementary C compiler, and the Arduino* microcontroller for Imperial's Formula Student race car. He is interested in machine learning and signal processing, high-level synthesis, and statistical methods for quantitative finance. Bryan lives in Sydney, Australia but is based in London during term time. Outside of work he enjoys playing badminton on the Imperial Medics team.
Brett Saiki
Brett Saiki is an undergraduate student at the University of Washington, double majoring in computer engineering and mathematics. He is a member of the Programming Languages and Software Engineering (PLSE) research lab and works on projects involving computer number systems and term-rewriting techniques. In his free time, he enjoys running, reading, and listening to music.
Avi Darbari
Avi Darbari is a secondary school student in the UK. He was awarded a silver medal in the Junior Mathematics Olympiad (awarded to 1,200 students out of hundreds of thousands of participants). He was the only UK finalist among the 20 chosen from 10,000 submissions in the Desmos Global Maths and Art Competition. He is a marketing and creative artist for Axiomise*. In his spare time, he plays piano and uses vector graphics and 3D modeling applications.
Rohan Udupa
Rohan Udupa is a student at Folsom High School who will graduate in 2023. He participates in local coding tournaments, robotics clubs, and the Platinum-ranked and state-recognized Cyberpatriots XV team. He enjoys running and is part of the school cross-country, track, and field teams.
Om Joshi
Om Joshi is an undergraduate at The University of Texas (UT) at Austin, majoring in electrical engineering, mathematics, and Plan II (UT's interdisciplinary liberal arts honors program). He works in a research lab that builds superconducting microwave circuits for quantum computing applications. In his free time he plays pick-up basketball and the violin, and enjoys biking.
2021 Interns
Mindy Kim
This high school intern will graduate in 2022 from Folsom High School and plans to major in computer science, specifically in the AI cluster. Mindy gained experience through hackathons and competitions. She won a silver medal for computer programming at SkillsUSA, and founded and organized COVID Hacks, an international hackathon with roughly 300 participants from over 11 countries.
Mindy is the president of the Robotics Club and Interact Club, and senior treasurer of the Competitive Speech and Debate Club at school. She works as a curriculum manager at Inspirit AI, an online education program that teaches high school and middle school students about machine learning. In her free time, she reads, plays the flute, and goes out with friends for boba tea.
Om Ajudia
Om is an intern from the University of California, Los Angeles (UCLA) who is studying for a bachelor's degree in applied mathematics and a minor in statistics and computing specialization. This is his first work experience outside of tutoring and grading in college and high school. In his free time, Om plays Spikeball* and volleyball, and works puzzles.
Bryce Orloski
This intern graduated in 2020 from Carnegie Mellon University with a bachelor's degree in mathematics and an additional major in computer science. He is working on a PhD in mathematics at Pennsylvania State University. Bryce's experience includes teaching, grading, tutoring, and mathematics research. He held a hardware optimization internship at Numerical Hardware Group in the summer of 2020 and 2021. He enjoys puzzles and playing the piano.
Dr. Bill Zorn
Bill earned his PhD at the University of Washington in 2021 by conducting research near the intersection of programming languages and computer architecture. His focus is on finite-precision number systems, finding ways to make them more transparent to programmers and more amenable to efficient hardware designs. In addition to his work at Intel, he is also an organizer for the open source FPBench project. In his spare time, he enjoys hiking (preferably with dogs) and playing games from the late 90s on unnecessarily overclocked gaming rigs.
2020 Interns
Venkata Sai MadhuKiran Harsha Nori
Venkata is designing a unique, relative-timed system-on-a-chip (SoC) as a part of his PhD research. He grew up in Hyderabad, India and received his bachelor's degree in electronics and communication engineering from Osmania University. At the University of Utah, Venkata completed a master's degree in computer engineering and is a PhD candidate. His internship at Granite Mountain Technologies involved developing hardware using relative-timed techniques. Venkata's research interests include relative-timed design, asynchronous circuits, electronic design automation (EDA) of asynchronous circuits, and verification. In his free time, Venkata enjoys hiking and learning music, policy, history, and languages.
Tianen Chen
Tianen is in his third year of PhD research in computer engineering at the University of Wisconsin-Madison at WISEST, the university's embedded systems and computing laboratory. He received bachelor's degrees in physics and electrical engineering from Carleton College and Columbia University, respectively. Tianen's research focuses on approximate computing methods applied to logic synthesis of deep neural networks. His past projects include approximate exponentiation methods. Previously, he interned at Seagate Technology* and has two summers of research experience in particle physics. He enjoys basketball, running, and playing soccer.
Bryce Orloski
This intern graduated in 2020 from Carnegie Mellon University with a bachelor's degree in mathematics and an additional major in computer science. He is working on a PhD in mathematics at Pennsylvania State University. Bryce's experience includes teaching, grading, tutoring, and mathematics research. He held a hardware optimization internship at Numerical Hardware Group in the summer of 2020 and 2021. He enjoys puzzles and playing the piano.
2019 Intern
Sam Coward
Sam completed a bachelor's degree in mathematics and a master's degree in scientific computing at the University of Cambridge in the UK. Throughout his studies, Sam took an interest in statistics, group theory, quantum mechanics, and computing. This led to internships that tackled formal verification at Cadence Design Systems and Riverlane (a quantum computing startup), and participation in a design optimization project at Intel. After graduating, he spent time at Nokia* developing firmware for network processors. He is now studying for a PhD in the Department of Electrical and Electronic Engineering at the Imperial College London. This involves close collaboration with the Graphics Numerical Hardware Group within the Intel Graphics Group. Outside of work, he is a keen squash player and, being London based, also enjoys a good theater trip.
Explore the world of Intel’s open platform projects, contributions, community initiatives, and more at open.intel.com.