We appear to be in a transformative period where a new technology is about to fundamentally change how the games we love are made and how they work.
The current discussion is about "fake" frames, which use rendering data (rather than static pictures) along with motion vectors to generate frames faster than the game engine and GPU can produce them through traditional rendering. I've been against even the base upscaling of DLSS and FSR, preferring a natively rendered image. But there are other uses beyond this that will change how games work.
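To make the motion-vector idea concrete, here is a minimal toy sketch of how a generated frame can be extrapolated by warping pixels along per-pixel motion vectors. This is purely illustrative and assumes nothing about how DLSS or FSR actually implement it; the function name and array layout are my own invention, and real systems handle occlusion, holes, and blending that this ignores.

```python
import numpy as np

def extrapolate_frame(frame, motion_vectors):
    """Toy frame generation: move each pixel of `frame` along its
    motion vector to guess the next frame.
    frame: (H, W) grayscale image.
    motion_vectors: (H, W, 2) per-pixel (dy, dx) displacement in pixels.
    (Illustrative only -- not any real DLSS/FSR API.)"""
    h, w = frame.shape
    out = np.zeros_like(frame)
    ys, xs = np.mgrid[0:h, 0:w]
    # Destination of each pixel after applying its motion vector,
    # clamped to the image bounds.
    ty = np.clip(ys + motion_vectors[..., 0], 0, h - 1).astype(int)
    tx = np.clip(xs + motion_vectors[..., 1], 0, w - 1).astype(int)
    # Forward-splat; real implementations must resolve collisions
    # and fill the holes this leaves behind.
    out[ty, tx] = frame
    return out

# A 4x4 frame where everything moves one pixel to the right.
frame = np.arange(16, dtype=float).reshape(4, 4)
mv = np.zeros((4, 4, 2))
mv[..., 1] = 1  # dx = +1 everywhere
next_frame = extrapolate_frame(frame, mv)
```

The point of the sketch is that a generated frame needs no new work from the game engine: given the last rendered frame and the engine's motion vectors, the warp is cheap GPU arithmetic, which is why these frames arrive faster than traditionally rendered ones.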
The linked video shows AI-driven physics, but the example runs in real time on nothing more than video. What happens when this is integrated into a game engine and has far more information to work with? The video also covers asset creation, including ultra-detailed model generation from images taken on an iPhone.