What has any of this got to do with what I originally wrote? You're just stating the bleeding obvious: '7nm production is much more expensive than previous iterations'. No shit. How about rebutting the points I actually made earlier?
You fixated on 7nm wafer costs, which aren't even in contention here. I'm sure MS and Sony know full well how expensive the tech is. My point was, and is, about yields. You stated smaller die = better yields, and I showed you why that isn't necessarily the case: several factors dictate yields. If Sony are going for absurdly high 1.8-2.0GHz clocks at a realistic TDP, the yields would be significantly poorer than for a bigger GPU clocked lower. And no, a 10% lower TDP wouldn't magically be a game changer. A 5700XT at its peak clock draws 227W; a 10% reduction gets you to the ~204W mark, but that's a GPU clocked at 1905MHz. Now imagine what would happen at 2.0GHz.
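To put rough numbers on the power-curve point: dynamic power scales roughly with C·V²·f, and voltage has to rise with clock once you're past the efficiency knee, so power grows superlinearly with frequency. Here's a back-of-envelope sketch; the cubic scaling assumption (V rising roughly linearly with f) is mine, and the only figures taken from the post are 227W @ 1905MHz:

```python
# Rough dynamic-power scaling sketch: P ~ C * V^2 * f.
# Assumption (mine, not established fact): above the efficiency knee,
# voltage rises roughly linearly with frequency, so P scales ~ f^3.
def scaled_power(p_ref: float, f_ref: float, f_target: float) -> float:
    """Estimate power at f_target from a reference point (p_ref @ f_ref)."""
    ratio = f_target / f_ref
    return p_ref * ratio ** 3  # one factor of f, two factors from V^2

# 5700XT reference point: ~227 W at 1905 MHz (figures from the post above)
print(round(scaled_power(227, 1905, 2000)))  # ~263 W at 2.0 GHz
```

Even this crude model shows why the last ~100MHz is so expensive: a ~5% clock bump costs ~15% more power, and real silicon near its voltage ceiling tends to do worse than the cubic estimate, not better.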
Also keep in mind these power curves don't apply the same way to a console chip. The 5700XT uses very high-quality silicon: the same die goes into the 5700, but the better bins are hand-picked for the 5700XT. Expecting a console-binned chip like the one in the PS5 or Series X to perform on the same level as a 5700XT is lunacy; they will never have thermals and power draw that good.
36CUs @ 1.8-2.0GHz is still way too high. Its equivalent would be the 5700, which runs 36CUs at a 1.625GHz game clock. How are Sony gonna squeeze another 175-375MHz out of it without destroying the power curve and fucking over their yields?
I'm gonna summarize once again in the hope that you get it. If you continue to be deliberately obtuse and post more irrelevant stuff, then don't bother replying.
Even if I agree that going narrow and fast has some merit, it has to stay within the realm of practicality. Consoles are generally underclocked compared to their PC counterparts because the mantra has always been efficiency: power, performance, thermals and cost. Now let's assume Sony decided to push the envelope and asked AMD for a GPU sandwiched between the 5700 and 5700XT, running at 1.8GHz. The silicon that makes that cut wouldn't be as high quality as a 5700XT's. But what happens to the chips that have 36 CUs but can't hit 1.8GHz? Or that have 36 active CUs and can hit 1.8GHz, but not without going past ~200W? What would Sony do with those binned chips? Throw them away?
This is where your entire argument breaks down. If Sony have a limited supply of 7nm wafers, it DOESN'T make sense to design an APU in the ~300-325mm2 territory when hitting their performance targets on that die means clocking the GPU at 1.8-2.0GHz while staying on the power curve. All the precious dollars they'd save by shaving 50-70mm2 off the die would backfire: yields would be incredibly poor, which means more precious wafers wasted, which means constrained supply, and in the end their APU costs more to make.
MS aren't stupid, and their Series X SoC is that big for a reason: they face the same production hurdles as Sony. They could've gone narrow and fast but didn't, because it's just more costly and inefficient even on a more mature N7P node. Wide and slow is what MS went with, and it's very likely the way Sony are heading too. The GitHub leak, while legit, lacks context; don't take it as gospel. My interpretation is that it describes a very early dev-kit Sony made so their developers could learn the tools and have games ready for the PS5 launch. I don't know what their performance target is; it could be 9.2TF, or 10.4TF, or higher. My point is that the final retail PS5 will hit that target with a GPU that is NOT clocked at 1.8-2.0GHz. It'll be done with a bigger Navi at lower clock speeds.