I think this whole reconstruction thing was a massive ball dropped by both Sony and MS. Especially Sony. With the PS4 Pro, Sony was way ahead of the curve on the idea that rendering at native resolution is just wasteful, and I applauded their initiative to build custom checkerboard-rendering (CBR) acceleration hardware into the PS4 Pro.
Then in 2018, when DLSS came to market, I thought that was the writing on the wall that the industry as a whole was going to take reconstruction tech seriously. Well, the industry did, but Sony, MS and AMD dropped the ball here.
This is 2023... five years removed from DLSS and seven years from the PS4 Pro's custom hardware (even if that wasn't very involved), and AMD still doesn't have a proper hardware-based AI reconstruction method, even though reconstruction has become the most prevalent graphical feature in game design since we switched from sprites to polygons. That is just mind-boggling.
It's shockingly bad. Every decade or so, game design and/or hardware makes one massive leap forward: sprites to polygons, shaders, physically based rendering, and now RT and reconstruction. Right now, on the two single biggest rendering methods defining this current phase of game design, AMD is non-existent. The fact that Intel, with its first discrete GPU, did a better job on RT and reconstruction than AMD would have been enough for me to fire AMD's whole R&D department.
And as for Sony and MS, they should have their own standardized reconstruction tech built into their SDKs. Something like PlayStation Super Resolution (PSR). And the same way we have Havok or Chaos physics middleware, I believe there should be RT lighting middleware that covers reflections, GI, AO, and shadows.
AMD, though, those guys are just certifiably useless in the GPU space. Sometimes I almost wish Sony would switch to Intel next gen, or even use an Nvidia GPU... though dealing with Nvidia is like dealing with the devil.
Just one thing to consider: the advantage modern upscalers have today is that they are based on temporal information.
Spatial upscalers are very limited in comparison. Things like DLSS 1, FSR 1, Lanczos, NIS, etc. don't hold up.
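To make that spatial/temporal distinction concrete, here's a minimal Python/NumPy sketch. It's illustrative only, not any vendor's actual algorithm, and the function names are mine:

```python
import numpy as np

def spatial_upscale(frame, scale=2):
    # Nearest-neighbour stand-in for a spatial filter (Lanczos, FSR 1, etc.):
    # each output pixel is built from the current frame alone, so the result
    # can never contain more detail than one low-res frame provides.
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def temporal_accumulate(history, current, alpha=0.1):
    # The core of a TAAU-style upscaler: blend the history buffer with the
    # current (jittered, upscaled) frame. Over many jittered frames, the
    # history converges on far more samples per pixel than any single frame
    # holds. Real upscalers also reproject the history with motion vectors
    # and clamp it against the current neighbourhood to reject ghosting;
    # both steps are omitted here for brevity.
    return (1.0 - alpha) * history + alpha * current

# Toy loop: the history buffer accumulates detail across frames.
history = np.zeros((8, 8))
for _ in range(32):
    low_res = np.random.rand(4, 4)  # stand-in for one jittered low-res render
    history = temporal_accumulate(history, spatial_upscale(low_res))
```

The point of the sketch is just that the temporal path has a second input, the history, which is why it can keep improving where a spatial filter has already hit its ceiling.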
And on the matter of temporal upscalers, the company that made the big push was Epic, with TAAU in UE 4.19, released in 2018.
It took until 2020 for Nvidia to catch up with DLSS 2.0, and even longer for AMD and Intel.
Today, Epic's TSR is a great solution for upscaling without using AI; it's already better than FSR 2.2.
Sony is strange in the sense that some of their studios already have a decent temporal upscaler, for example Insomniac's IGTI.
They could have shared the tech with more studios and implemented it in more Sony games.
MS showed some tech demos of AI upscaling a few years ago, before the release of the Series S/X, but it seems they never did anything with it. A shame, really.
I think AMD had good reason not to include dedicated hardware for RT and AI in RDNA2, as it was developed mostly for consoles, and in consoles, die space is at a premium.
Unlike a dedicated PC GPU, a console SoC has to house not only the GPU but also the CPU, IO controllers, memory controllers, caches, etc.
On Ampere, the AI cores account for ~10% of the chip and the RT cores for ~25%. This is great for performance, but it takes a lot of space.
Doing this on the PS5's chip would mean the console would have only 5-7 TFLOPs. So it makes sense for consoles to use a hybrid solution.
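For what it's worth, that figure roughly checks out. A quick sanity check, under the crude assumptions that shader throughput scales linearly with the die area left over for CUs and that the Ampere area shares transfer to an RDNA2-class chip:

```python
# Back-of-envelope check of the 5-7 TFLOPs figure, using the Ampere area
# shares quoted above (~10% AI + ~25% RT) and the PS5's 10.28 TFLOPs.
# Big simplification: shader throughput scales linearly with the GPU area
# remaining once dedicated units are carved out of a fixed die budget.
ps5_tflops = 10.28
dedicated_share = 0.10 + 0.25            # AI cores + RT cores, per the above
shader_tflops = ps5_tflops * (1.0 - dedicated_share)
print(f"~{shader_tflops:.1f} TFLOPs left for shaders")  # ~6.7, inside 5-7
```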
What doesn't make sense is that RDNA3 is doing the same thing: RT is still done in the TMUs, and AI is still done on the shaders.
Yes, there are new instructions for both cases that improve performance, but they are nowhere near as good in performance and efficiency as the dedicated units Intel and Nvidia have.
From the rumors we have, RDNA4 will fix these things, but AMD is lagging a lot.
Intel does have a huge problem with their drivers and shader efficiency.
Just consider that the A770 has 20 TFLOPs of compute, slightly lower than a 6800 XT, yet it performs closer to a 6600 XT.
The A770 has a die size of 406 mm² on N6, compared to the 6800 XT's 520 mm² on N7, and the 6800 XT doesn't even use the full chip.
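To put a rough number on that efficiency gap (the 6600 XT figure below is an approximate public spec I'm adding, and these are paper TFLOPs, not benchmarks, so treat it as illustrative only):

```python
# Rough utilization comparison from the paper specs quoted above.
a770_tflops = 20.0       # from the post
rx6600xt_tflops = 10.6   # approximate public FP32 spec for the 6600 XT
# If the A770 only games like a 6600 XT, it is delivering roughly half of
# its paper compute relative to RDNA2:
print(f"effective compute delivered vs RDNA2: {rx6600xt_tflops / a770_tflops:.0%}")  # ~53%
```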
So Intel has a good chunk of catching up to do with Battlemage.