20.9 TF at 2.721 GHz on the 5nm process with 60 CUs. This is why I am expecting the PS5 Pro to reach 20 TF if it uses at least the 5nm node, and I don't see it using anything less than that a year after the 7800 XT. Sony will need about 2.6 GHz to reach 20 TF, and I expect they will hit that (or come very close) with their dynamic clocks.
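For reference, here is a quick sketch of where that 20.9 TF figure comes from, assuming the usual single-issue RDNA counting of 64 shaders per CU and 2 FLOPs per clock; the CU counts and clocks are just the ones being discussed in this thread, and the 60 CU / 2.6 GHz PS5 Pro config is hypothetical:

```python
# Rough FP32 TFLOPS for an RDNA-style GPU:
# TFLOPS = CUs * 64 stream processors per CU * 2 FLOPs per clock (FMA) * clock in GHz / 1000
def rdna_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(rdna_tflops(60, 2.721))  # ~20.9 TF, the 7800 XT figure quoted above
print(rdna_tflops(60, 2.6))    # ~20.0 TF, a hypothetical 60 CU PS5 Pro at 2.6 GHz
print(rdna_tflops(36, 2.23))   # ~10.3 TF, the base PS5 for comparison
```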
Nope. Did you see the size of the OG PS5? I am beginning to think that a lot of people here just talk without really knowing what they are talking about.
The OG 2020 PS5, along with its size, drew ~220W on game load. That dropped to 202W with the 6nm revision, which is the same chip the PS5 Slim uses. That GPU you are talking about on PC doesn't even run at 2.7 GHz; it runs at 2.5 GHz and draws over 260W, and that is the GPU alone. There is no way, not a chance in hell, that the PS5 Pro has a GPU clocked that high when there is also a CPU to contend with and it all needs to fit into a console chassis. It just doesn't make sense.
Further, a console variant of a PC GPU/CPU has NEVER matched the PC equivalent in clocks or power consumption. They are always downclocked, and for good reason. If the PS5 Pro is on 5nm, do not expect its GPU to be clocked anywhere above 2.4 GHz. If anything, 2.35 GHz is more likely.
They are real, at least for Nvidia.
At 2750 MHz a 4090 gives you 45 TF INT32 or 90 TF FP32.
Back in the day, with Turing and older GPUs, you had for example 10 TF INT32 and 10 TF FP32 from separate 1:1 cores. Since Ampere, Nvidia GPUs have half their cores able to do either INT32 or FP32 and the other half able to do only FP32. This gave them roughly a 30% boost in games at the same SM count.
Compare the 68 SM 2080 Ti vs the 68 SM 3080 in Doom, a game that leans on FP32.
The PS5 has 10.3 TF of INT32/FP32 (the same ALUs handle both).
INT32 is used somewhere around 25 to 35% of the time in games.
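As a rough sketch of where those 4090 numbers come from, assuming the standard post-Ampere SM layout of 128 FP32 lanes per SM with half of them able to run INT32 instead, and counting 2 ops per clock the same way the figures above do:

```python
# Peak-throughput sketch for a post-Ampere Nvidia GPU.
# Assumes 128 FP32 lanes per SM, 64 of which can run INT32 instead,
# and counts 2 ops per clock (FMA/IMAD), matching the figures quoted above.
def nvidia_peak_tf(sm_count: int, clock_ghz: float) -> tuple[float, float]:
    fp32_tf = sm_count * 128 * 2 * clock_ghz / 1000   # every lane doing FP32
    int32_tf = sm_count * 64 * 2 * clock_ghz / 1000   # only the shared half doing INT32
    return fp32_tf, int32_tf

print(nvidia_peak_tf(128, 2.75))  # RTX 4090, 128 SMs at 2750 MHz: ~90 TF FP32, ~45 TF INT32
```

In a real game the shared half splits its time between INT32 and FP32, so the effective FP32 rate lands somewhere between 45 and 90 TF rather than at the headline figure.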
I am sorry, but I don't see how what you just posted proves they are actually being used. I am not saying they aren't real, just that we aren't seeing them anywhere. And you shouldn't even be using a "benchmarking tool" for this argument, because the question here is whether they are actually being used in games.
Here's the problem. Let's take three GPUs from Nvidia: the 2080, 3080 and 4080. Take a game like RE4 running on all three at 1080p with no RT, so none of these GPUs can be bottlenecked by RT performance, and VRAM is barely a factor since RE4 at 1080p peaks at 9.4 GB; only the 2080 may suffer with its 8 GB, which, mind you, should work in favor of the other GPUs.
2080 (10 TF): 98 fps
3080 (14.8 TF, +48% vs the 2080, or +197% if you use the claimed 29.7 TF): 130 fps, only +32% over the 2080
4080 (24.4 TF, +140% vs the 2080, or +387% if you use the claimed 48.7 TF): 200 fps, only +104% over the 2080
See what's happening there? The resulting performance is far more in line with the actual TF differences and nowhere near the claimed ones. For giggles, the 4090 runs it at 228 fps. And remember, the 2080 is the only GPU here that is even RAM bottlenecked. See why I am not buying it?
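A quick way to sanity-check that scaling argument is to recompute the percentages from the fps and TF numbers quoted above; nothing new here, just the same figures run through the ratios (rounding may differ by a point or two from the post):

```python
# Recompute the gains quoted above: "actual" TF gain vs claimed (doubled) TF gain vs fps gain.
# The TF and fps values are the ones from the post, RE4 at 1080p with no RT.
gpus = {
    "2080": {"tf": 10.0, "claimed_tf": 10.0, "fps": 98},
    "3080": {"tf": 14.8, "claimed_tf": 29.7, "fps": 130},
    "4080": {"tf": 24.4, "claimed_tf": 48.7, "fps": 200},
}

base = gpus["2080"]
for name, g in gpus.items():
    tf_gain = (g["tf"] / base["tf"] - 1) * 100
    claimed_gain = (g["claimed_tf"] / base["claimed_tf"] - 1) * 100
    fps_gain = (g["fps"] / base["fps"] - 1) * 100
    print(f"{name}: actual TF +{tf_gain:.0f}%, claimed TF +{claimed_gain:.0f}%, fps +{fps_gain:.0f}%")
```

The fps gains track much closer to the single-counted TF gap than to the doubled one, which is exactly the point being made here.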