Now, here's another post we should all take with a massive grain of salt (it's based on the currently available information on both consoles).
We can all agree that the Xbox Series X has the edge in GPU grunt, to the tune of ~1.8TF. Let's round that to 2TF to keep the math simple (12TF for the Xbox Series X, 10TF for the PS5). Now, have we considered the implications SmartShift may have for games? Going by AMD's data, SmartShift improves performance by up to 14%, which is nothing to sneeze at, and matches what Cerny claimed about the PS5 punching above its weight. Now, we all know that consoles and PCs NEVER reach their peak TFLOP performance, so let's assume peak utilization of around 80%, with an average around the 70% mark. I think I'm overshooting these numbers; game devs, feel free to correct me.
This would mean the peak and average utilization numbers for the next-gen consoles would be (rough math in the sketch below the list):
- Peak
  - Xbox Series X: 9.6TF
  - PS5: 8TF
- Average
  - Xbox Series X: 8.4TF
  - PS5: 7TF
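If you want to play with the numbers yourself, here's the back-of-the-envelope version in Python. The 80% / 70% utilization figures are my own assumptions from above, not anything official:

```python
# Back-of-the-envelope math: paper TFLOPs scaled by assumed utilization.
# The 0.80 / 0.70 utilization figures are my guesses, not official numbers.
PAPER_TFLOPS = {"Xbox Series X": 12.0, "PS5": 10.0}  # rounded, as above

PEAK_UTILIZATION = 0.80     # assumed best-case sustained utilization
AVERAGE_UTILIZATION = 0.70  # assumed typical in-game utilization

for console, tf in PAPER_TFLOPS.items():
    print(f"{console}: peak ~{tf * PEAK_UTILIZATION:.1f} TF, "
          f"average ~{tf * AVERAGE_UTILIZATION:.1f} TF")
```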
Now, why does this happen? Well, even though you are running at a clock speed of X, not ALL transistors are switching at once. If they all switched at once, odds are the console would shut down. This is why your fans kick into high gear when playing GOW on your PS4, for example: more transistors are switching, consuming more power and therefore generating more heat. And here's where variable clocks, a fixed power budget, and SmartShift come into play.
With SmartShift, your system will automatically assign whatever unused power your CPU or GPU has to the other party, meaning you can do more with the same clock speeds. No system has "full" utilization of its available GPU / CPU power at ALL times, so there's always some leftover power. This allows you to switch on more transistors in one of them (while the other is under lower utilization), providing that performance improvement AMD mentions. So, what does this mean?
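To make the idea concrete, here's a toy model in Python of a fixed power budget being shared between CPU and GPU. The numbers and the logic are purely illustrative assumptions on my part; the real SmartShift behaviour lives in AMD's firmware/hardware and is not this simple:

```python
# Toy model of a fixed, shared power budget (purely illustrative numbers).
# Real SmartShift logic is done in firmware/hardware; this just shows the idea:
# whatever power the CPU isn't drawing can be handed to the GPU.
TOTAL_POWER_BUDGET_W = 200          # hypothetical total SoC budget
CPU_MAX_W, GPU_MAX_W = 60, 180      # hypothetical per-component ceilings

def shift_power(cpu_demand_w: float) -> tuple[float, float]:
    """Give the CPU what it asks for (up to its cap), hand the rest to the GPU."""
    cpu_alloc = min(cpu_demand_w, CPU_MAX_W)
    gpu_alloc = min(TOTAL_POWER_BUDGET_W - cpu_alloc, GPU_MAX_W)
    return cpu_alloc, gpu_alloc

# GPU-heavy scene: CPU only needs 35 W, so the GPU gets the leftover headroom.
print(shift_power(35))   # -> (35, 165)
# CPU-heavy scene: CPU maxes out its cap, so the GPU has to live with less.
print(shift_power(60))   # -> (60, 140)
```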
It means that either Sony managed to pull a rabbit out of their hat, or this will be a massive failure. However, assuming it is a success and does perform as AMD claims (again, math in the sketch after this list):
- Peak
  - Xbox Series X: 9.6TF
  - PS5: anything between 8TF and 9.12TF
- Average
  - Xbox Series X: 8.4TF
  - PS5: anything between 7TF and 7.98TF
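Same back-of-the-envelope treatment, in Python, treating AMD's "up to 14%" as the best case on top of my assumed utilization baselines:

```python
# PS5 effective-TFLOP range if SmartShift delivers anywhere from 0% to 14% uplift.
# Baselines come from the assumed 80% / 70% utilization figures above.
SMARTSHIFT_MAX_UPLIFT = 0.14  # AMD's "up to 14%" claim, treated as best case

for label, baseline_tf in (("peak", 8.0), ("average", 7.0)):
    best_case = baseline_tf * (1 + SMARTSHIFT_MAX_UPLIFT)
    print(f"PS5 {label}: {baseline_tf:.2f} TF to {best_case:.2f} TF")
```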
The gain is obviously not enough to catch up with the XBSX, but it does close the gap.
There is, however, a downside. If a game is not "properly optimized", paraphrasing Cerny, the console may downclock one or even both components by a "couple percent". That difference is negligible.
Note: there are a lot of assumptions here, as I've mentioned, and you should take this with a massive grain of salt. I just wanted to provide some added context to this conundrum.