" id="header">

Research trends: The future of computer graphics—and the tools we’ll use to create them

March 28, 2022

Tags: Graphics (2D & 3D)

Wondering which trends and ideas will shape technology in 2022 and beyond? We talked to a few members of the Adobe Research team to find out.

Nathan Carr, an Adobe Research Senior Principal Scientist with a focus on computer graphics, gave us a peek into the future of 2D and 3D imaging. He also shared his vision for how we’ll interact with creative tools in the future.

Can you tell us about a trend you’re following in 2022?

One popular trend is the development of new representations for 3D assets. In the past, we stored things as triangles and surface patches. Now we're looking at neural representations, which compress material, appearance, and geometry into a single optimized form that can be displayed and manipulated.
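To make that idea concrete, here's a minimal, hypothetical sketch of a neural representation in the NeRF style: a small network that maps a 3D point to color and density, so geometry and appearance live together in one set of optimized weights. This illustrates the general technique, not Adobe's implementation.

```python
# Hypothetical sketch of a coordinate-based neural representation: a small
# MLP maps a 3D position to color and density, so appearance and geometry
# are stored together in one set of learned weights.
import torch
import torch.nn as nn

class NeuralAsset(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB color + density
        )

    def forward(self, xyz):
        out = self.net(xyz)
        color = torch.sigmoid(out[..., :3])   # appearance
        density = torch.relu(out[..., 3:])    # geometry (occupancy-like)
        return color, density

# Query the "asset" at arbitrary 3D points -- no triangles or patches needed.
points = torch.rand(1024, 3)
color, density = NeuralAsset()(points)
```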

You can think of this as a next-generation 3D photograph. It’s a way to capture and represent the shape and appearance of the world so that we can start interacting with it in a deeper, richer way. I think these new representations will open a lot of exciting opportunities. High-end effects like raytracing are now being used in games to bring an unprecedented level of visual quality. I wonder whether a new evolution in rendering is about to take place that mixes light transport simulation with machine learning to produce photorealistic imagery even more efficiently. 

What do you hope researchers can accomplish this year?

One thing I’m thinking about deeply is the emergence of algorithms that will transform how we work with computers. I think we want to move to a place where humans are interacting with computers in their own language and leveraging intuition about the world, rather than forcing humans to operate in the language of computers.

We’ve seen this trend over time. I think it will accelerate, and this has some deep ramifications. For example, today there’s a huge learning curve that artists go through to understand how computers represent things. In particular, 3D design software is incredibly complex, taking years to master.  Now we’re starting to be able to train algorithms that enable computers to meet humans in their own frame of mind, which will democratize creative tools so more people can access them. Computers will operate in a collaborative manner, anticipating and assisting in very complex tasks with high-level guidance from humans. With these new systems, artists and creatives will be able to produce content more efficiently while being more expressive.

What do you think people will be talking about at conferences and in papers this year?

As we talk about computers operating in the language of humans, we’ll need to collect a lot of data about the world.

For example, classically, we have represented images or 3D objects in very primitive forms without any extra information. You can look at the colors and try to guess what a picture shows, or, if it’s a 3D triangle mesh, you can look at the shape of the geometry, but there’s only so much information there.

But if you pair this with a knowledge base of what is in millions of photos, or if you have huge collections of 3D shapes that people have authored and you know what their semantic meanings and relationships are, then suddenly you can apply that learning to unlock a lot of new capabilities. I think we’ll be talking about this transformation.
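As a toy illustration of the kind of capability he's describing, suppose you already have embeddings for a labeled library of shapes (the encoder itself is out of scope here; the vectors below are random stand-ins). A bare mesh can then be given a semantic label by nearest-neighbor lookup:

```python
# Hypothetical sketch: pairing raw geometry with semantic knowledge. Given
# embeddings for a library of labeled 3D shapes, a new shape is tagged by
# nearest-neighbor lookup -- something a bare triangle mesh can't provide.
import numpy as np

rng = np.random.default_rng(0)
library = {                       # label -> embedding (stand-in values)
    "chair": rng.normal(size=128),
    "table": rng.normal(size=128),
    "lamp":  rng.normal(size=128),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def semantic_label(query_embedding):
    # Return the label of the most similar shape in the knowledge base.
    return max(library, key=lambda k: cosine(library[k], query_embedding))

print(semantic_label(rng.normal(size=128)))
```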

Which trends are you excited about beyond your field?

Hardware trends are exciting because a lot of what we do with machine learning and AI is often limited by hardware and compute. Even when we manage to train useful AI algorithms, we often struggle to deploy them on low-power devices because there just isn’t the hardware capability. This, however, is changing rapidly.

As a computer graphics research scientist, it used to be that you just needed one nice computer and a compiler, and you could do your work. Now you need a cluster or a supercomputer at your beck and call to train the latest machine learning algorithms. So how does this scale to every developer and creator? The power budgets are not on a sustainable path and need addressing. This will require not just innovations in hardware, but a co-evolution of the software algorithms behind machine learning models.

I also wonder about new forms of “fuzzy” computing (e.g., quantum computing) where we might be willing to tolerate a little bit of imprecision or uncertainty in an answer. If this form of computing can execute orders of magnitude faster with lower power requirements and reasonable accuracy, it may be worth the trade-off. These new processors may require an entire reinvention of the algorithms we use and changes in the ways we write code. Regardless, being able to train models at scale with massive data and deploy them will require deep ingenuity. I believe such issues will be at the forefront of computing over the next decade.
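A rough sketch of that trade-off in today's terms is weight quantization: storing values at 8-bit precision (a stand-in for the kind of tolerable imprecision he describes) shrinks the representation fourfold while introducing only a small, measurable error:

```python
# Sketch of the accuracy/efficiency trade-off: quantizing weights to 8-bit
# integers gives a 4x smaller representation at the cost of a small error.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=10_000).astype(np.float32)

scale = np.abs(weights).max() / 127.0
quantized = np.round(weights / scale).astype(np.int8)   # 4x smaller storage
restored = quantized.astype(np.float32) * scale

rel_error = np.abs(restored - weights).mean() / np.abs(weights).mean()
print(f"mean relative error: {rel_error:.4%}")  # small, on the order of 1%
```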

Wondering what’s going on in 2D and 3D graphics at Adobe Research? You can learn more here.
