I have spent my career at the forefront of graphics technology, with the added good fortune of experiencing a breadth of it — from game engines and film rendering to hardware and middleware to AR and VR headsets. I have had plenty of work because everybody, including myself, wants more immersive visual experiences. It is a basic fact: We are visual-first creatures, and our appetite for “a feast for the eyes” is never satisfied.
We are entering a new era in graphics, and it is not going to arrive in the form of any one device. You may want an immersive display in your living room, in your car, in your hands or on your head. Whatever the preference, delivering a consistent, high-quality visual experience across devices is a challenge that needs to be solved. This is what keeps me excited about my line of work, and it is one of the reasons I chose to join Intel.
The ultimate prize is not only richer games, but also future immersive experiences that realize the metaverse vision Raja Koduri outlined last December: remote work and presence, photoreal simulation and user-“photographed” 3D virtual environments. To achieve that, we need visual compute to become more ubiquitous, more efficient, more accessible and more intelligent.
Our objective is to build the best possible graphics solutions, but we also want to move the entire field, the whole ecosystem, forward. And we can only do that by being open, collaborative and forward-looking.
To work on this, I have established and am leading a new graphics research function within the Accelerated Computing Systems and Graphics Group (AXG). My team’s mission is to bring this future about by advancing the algorithms, systems and hardware behind graphics. Ultimately, our goal is to deliver the best immersive visuals to everybody on the planet, across all market segments, use cases and experiences that require such visuals: immersive entertainment and games, future virtual social interactions, democratized content creation and realistic visual effects.
As a testament to our open and collaborative approach, the team has made a new Sponza scene available to the research community as a free download. It is a modern version of the iconic sample I published in 2010 while at Crytek, which became a default test scene across gaming, visualization, film and other fields of research and development, referenced in thousands of academic and research papers.
After more than a decade, the Sponza scene was overdue for an update. The new scene incorporates physically based materials with 4K textures and high-resolution geometry, photogrammetrically matched to the real Sponza Atrium in Dubrovnik, Croatia. We built it to be as standardized as possible, and it is offered in several modern formats, with the particular goal of making graphics research more reproducible.
Reproducibility is a common struggle in graphics research. With the new Sponza, a researcher publishing a new method can render the scene using the professional camera setup and let the world compare the results against the reference images, making the differences, and the impact of the research, directly visible.
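To make that comparison step concrete, here is a minimal sketch of how a rendered frame could be checked against a published reference image. It assumes two same-sized PNG files (the file names are hypothetical placeholders) and uses PSNR, one reasonable metric among many, rather than any metric prescribed by the Sponza release.

```python
# Minimal sketch: compare a rendered frame against a reference image.
# Assumes two same-sized PNGs; file names are hypothetical placeholders.
import numpy as np
from PIL import Image

def load_rgb(path: str) -> np.ndarray:
    """Load an image as a float RGB array in [0, 1]."""
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(1.0 / mse)

render = load_rgb("render.png")        # your method's output
reference = load_rgb("reference.png")  # published reference image

print(f"PSNR: {psnr(render, reference):.2f} dB")

# A per-pixel difference image makes the errors visually inspectable.
diff = np.abs(render - reference).max(axis=-1)  # max channel error per pixel
Image.fromarray((diff * 255).astype(np.uint8)).save("diff.png")
```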
And depending on the type of graphics research being done (level of detail or light transport, shading or materials, for instance), we have provided a collection of additional package options for the base Sponza scene, with advanced lighting, geometry or materials, so that researchers can pick the packages they need.
The new Sponza scene and its additional package options reflect some of the near-term challenges our research team is looking to tackle. But there are several major paradigm shifts happening in graphics that our team thinks about daily.
First, gone are the days of simple scenes and illumination. Film visuals have been elevated to a whole new level of photorealism thanks to physically correct materials and light transport simulation, often referred to as path tracing. And path tracing is no longer confined to offline film rendering. Games shifted to physically based materials over a decade ago, and with the advances in ray tracing performance, we are now witnessing a shift toward real-time path tracing in games as well. While the visual impact of path tracing has been clearly demonstrated in film, many challenges remain before it is practical in real-time graphics and games.
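To make the core idea concrete: path tracing estimates the light transport integral by averaging random samples. The sketch below is a minimal illustration of that Monte Carlo principle for a single diffuse bounce under a toy sky light; it is not any particular renderer, and every name and the sky model in it are assumptions chosen for brevity.

```python
# Minimal sketch of the Monte Carlo idea behind path tracing:
# estimate outgoing radiance at a diffuse surface point by averaging
# random incoming-light samples over the hemisphere.
# The "sky" light model and all names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)

def sample_cosine_hemisphere(n: np.ndarray) -> np.ndarray:
    """Cosine-weighted direction around normal n (importance sampling)."""
    u1, u2 = rng.random(), rng.random()
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
    # Build an orthonormal basis around n and transform the local sample.
    t = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(t) < 1e-6:
        t = np.cross(n, [0.0, 1.0, 0.0])
    t = t / np.linalg.norm(t)
    b = np.cross(n, t)
    return local[0] * t + local[1] * b + local[2] * n

def sky_radiance(d: np.ndarray) -> float:
    """Toy analytic environment light: brighter toward the zenith."""
    return max(d[2], 0.0)

def estimate_diffuse_radiance(n: np.ndarray, albedo: float, spp: int) -> float:
    """One-bounce Monte Carlo estimate of the rendering equation.
    With cosine-weighted sampling, pdf = cos(theta)/pi, so the cosine term
    and the Lambertian 1/pi cancel: each sample contributes albedo * L_in."""
    total = 0.0
    for _ in range(spp):
        wi = sample_cosine_hemisphere(n)
        total += albedo * sky_radiance(wi)
    return total / spp

n = np.array([0.0, 0.0, 1.0])  # surface normal pointing up
for spp in (4, 64, 1024):
    print(spp, "samples ->", estimate_diffuse_radiance(n, albedo=0.8, spp=spp))
# The estimate converges as the sample count grows; the hard part of
# real-time path tracing is acceptable quality at very few samples per pixel.
```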
Second, many of the things we do in graphics are smart approximations, and machine learning has recently proven to be a powerful approximation tool for challenging problems. In time, machine learning will revolutionize graphics as well: the full workflow, from creation to consumption of visuals, can be deeply reworked using machine learning tools. We are on the cusp, and our work on Xe Super Sampling (XeSS) technology is just the beginning.
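As a rough illustration of the super-sampling idea, the sketch below shows a tiny network that maps a low-resolution frame to a 2x-larger one. To be clear, this is a generic toy model, not XeSS or its architecture; every layer choice and name here is an assumption made for illustration.

```python
# Minimal sketch of the idea behind learned super sampling: a small
# network maps a low-resolution frame to a higher-resolution one.
# This is a generic illustration, not XeSS; architecture and names
# are assumptions chosen for brevity.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """2x upscaler: a few convolutions followed by a pixel shuffle."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            # Produce 3 * 2 * 2 channels, then rearrange into a 2x larger image.
            nn.Conv2d(channels, 3 * 4, 3, padding=1),
            nn.PixelShuffle(2),
        )

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        return self.net(low_res)

model = TinyUpscaler()
low_res = torch.rand(1, 3, 270, 480)   # e.g. a 480x270 rendered frame
high_res = model(low_res)              # -> (1, 3, 540, 960)
print(high_res.shape)

# In practice such a model would be trained on pairs of low-resolution
# renders and high-quality reference frames (and temporal variants would
# also consume motion vectors and previous frames).
```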
Third, we are witnessing a boom in user-generated content, particularly in video. Now imagine the next boom happening in three dimensions: content that is just as easy to produce, distribute and consume everywhere, consistently immersive and high quality, and rendered in real time.
The graphics field is evolving quickly, and scaling it out to meet the upcoming demands will be a breathtaking journey. That is why I am so excited about visual compute. My team is looking at all the game-changing technologies required to bring this future forward. Today, and even more so in the future, graphics is only growing in importance.
Anton Kaplanyan is vice president of the Accelerated Computing Systems and Graphics Group, chief technology officer and director of the Graphics Research Organization at Intel Corporation.
We are always looking for more brilliant researchers at the intersection of graphics and AI. If you enjoy pushing pixels toward immersive visuals, come join us on this new journey.