Putting to one side the final TF metric for PS5 – we won’t know that until we know – I’m still really interested in the design ethos for each console, and especially the clocks chosen for the CPU and GPU: how they vary between current-gen game use and PS3/360-gen use, assuming we are getting hardware B/C.
The clocks for last-gen CPUs were really interesting (IMHO) because they were pitched at a level that made easy B/C impossible for the PS3 on PS4, and only a somewhat simpler case for the 360 on the One – which I’ll explain in a second. With the game industry distilling game logic into a single general-purpose CPU core (with 2 hardware threads) running at 3.2GHz on both systems, the APUs in the PS4/One had no way of accommodating those main-loop workloads across multiple cores at half the clock speed without game re-engineering (cross-gen patching on PS4) or extensive emulation work (Xbox One).
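To make that re-engineering point concrete, here’s a minimal, hypothetical sketch (the names and the job-system shape are my own assumption, not any actual console SDK) of what a cross-gen patch has to do: take a serial main loop that was tuned for one fast hardware thread and split it into jobs that several slower cores can chew through in the same frame time.

```cpp
#include <thread>
#include <vector>
#include <functional>

// Hypothetical last-gen style update: everything ran back to back each frame
// on one fast core, e.g. input, AI, physics, animation in sequence.
void UpdateFrame_SingleCore() {
    // UpdateInput(); UpdateAI(); UpdatePhysics(); UpdateAnimation(); ...
}

// What a cross-gen patch roughly has to turn that into: independent jobs that
// can run concurrently, so several slower cores finish the frame on time.
void UpdateFrame_JobSplit(const std::vector<std::function<void()>>& jobs) {
    std::vector<std::thread> workers;
    workers.reserve(jobs.size());
    for (const auto& job : jobs)
        workers.emplace_back(job);   // roughly one slower core per job
    for (auto& w : workers)
        w.join();                    // the frame can’t present until every job is done
}
```

The catch, of course, is that the original serial loop usually has data dependencies between those stages, which is exactly why the hardware or OS can’t do this split transparently – it’s per-game engineering work, hence cross-gen patches on PS4 and heavyweight emulation on Xbox One.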
For the Xbox 360, its multi-core CPU had for the most part been a waste of die space: the Silicon Knights complaint about the original Unreal Engine 3-powered Gears of War using multi-core rendering meant the 2nd and 3rd general-purpose cores were doing rendering workloads they weren’t suited to run – and presumably did so on all UE3 games thereafter – just for whatever extra rendering help they could give Xenos. So moving those games’ workloads to the Xbox One CPU cores resulted in better performance, because they were already light workloads and would be perfectly suited to the APU cores in emulation (AFAIK).
The PS3 unfortunately couldn’t do the same. The SPU cores were of similar efficiency to the APU cores but ran at twice the clock speed – and there were 6 SPUs against only 6 spare APU cores, each at half the clock – so although much of that workload in cross-gen patched games could be moved to the PS4 GPU, some of it simply couldn’t be, because of clock speed. An excellent DF comparison of Journey on PS3 versus PS4 highlighted workload that was removed from the higher-resolution PS4 version. Playing through The Last Guardian, there is also at least one water-puzzle section with major slowdown, and I would be shocked if the water physics simulation code in the unreleased PS3 version doesn’t run perfectly – which brings me to my last points.
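To put a rough number on the clock-speed problem, here’s my own back-of-envelope arithmetic, assuming the commonly cited 3.2GHz for the Cell SPUs and 1.6GHz for the PS4’s Jaguar cores (not official figures from either vendor):

```cpp
#include <cstdio>

// Back-of-envelope per-frame cycle budgets at 30fps. Clock figures are the
// commonly cited ones (3.2GHz Cell SPU, 1.6GHz PS4 Jaguar), used only to
// illustrate the ratio, not as official specs.
int main() {
    const double frame_seconds = 1.0 / 30.0;
    const double spu_hz    = 3.2e9;
    const double jaguar_hz = 1.6e9;

    // Roughly 107 million cycles per SPU per frame versus roughly 53 million
    // per Jaguar core: an SPU job that nearly fills its frame budget can’t
    // simply be dropped onto one spare Jaguar core, it has to be split up or
    // moved to GPU compute.
    std::printf("SPU cycles/frame:    %.0fM\n", spu_hz    * frame_seconds / 1e6);
    std::printf("Jaguar cycles/frame: %.0fM\n", jaguar_hz * frame_seconds / 1e6);
}
```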
PS3/360-gen clock speeds aren’t power efficient for hot APU chips, so if Sony and Microsoft did allow the APU cores (in PS5/XSX) to run that high, would they only permit it in a specific B/C mode that downclocks the GPU massively to keep the APU cool? If not, are higher clock speeds an essential part of RT workloads? And does going to a higher clock on CPU and GPU help reduce things like memory thrashing, by giving the CUs enough clock cycles per frame to get their work done in time with fewer redundant memory accesses?