He also said it would be 8 TF, with no SSD and no RT.
As far as I read, Geometry Engine > VRS.
How is GE better than VRS when RDNA1 already has a Geometry Engine (meaning, by extension, unless AMD completely removed it for RDNA2, the XSX also has a Geometry Engine)? Also, keep in mind that GE and VRS are completely different things done at different points of the graphics pipeline, so they aren't even directly comparable.
People are getting this take from what Matt said on Twitter, but he was merely saying that VRS alone isn't efficient if other parts of your pipeline aren't optimized, which is 100% true. He wasn't making a direct comparison, the way some of you are taking it. His comment was more an allusion to Primitive Shaders, since the Eurogamer article on PS5 (as one example) specifically mentions Primitive Shaders. So Matt is basically saying you need something to cull unneeded graphics data before trying to optimize the output with something like VRS...
...which the XSX has in the form of Mesh Shaders, introduced in RDNA2 but previously present on Nvidia GPUs. So if Sony and other sources are specifying GE and PS, those were already things in RDNA1, and it's logical to assume the GE carries over into RDNA2 while PS has generally been replaced by Mesh Shaders. You can read into that however you want, but I take it to mean Sony has stuck (for whatever reason) with Primitive Shaders instead of Mesh Shaders, while possibly customizing parts of the RDNA1 GE and PS to reach feature parity with RDNA2: RT (customized RT support added in the CUs), VRS (either in the equivalent part of the RDNA2 pipeline, or implemented differently in Sony's version), etc.
There are literally no grounds for comparing the Geometry Engine to VRS, because they aren't even doing the same thing...
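To put some (completely made-up) numbers on that, here's a toy cost-model sketch. The triangle/pixel counts, culling ratio, and shading rate are all invented for illustration; this is nothing like how a real GPU schedules work, it just shows the two features hitting different stages:

```python
# Toy frame-cost model (made-up numbers, not a real GPU simulator).
# Point: culling (GE / Primitive Shaders / Mesh Shaders) and VRS act
# on DIFFERENT pipeline stages, so they complement rather than compete.

def frame_cost(triangles, pixels, cull_ratio, shading_rate):
    # Geometry stage: culling throws away back-facing / off-screen /
    # sub-pixel triangles before they are rasterized.
    geometry_work = triangles * (1.0 - cull_ratio)
    # Pixel stage: VRS shades pixels at coarser rates, e.g. 2x2 coarse
    # shading means roughly 1/4 the pixel-shader invocations.
    pixel_work = pixels / shading_rate
    return geometry_work + pixel_work  # ignores overdraw, bandwidth, etc.

base      = frame_cost(2e6, 8.3e6, cull_ratio=0.0, shading_rate=1)
cull_only = frame_cost(2e6, 8.3e6, cull_ratio=0.5, shading_rate=1)
vrs_only  = frame_cost(2e6, 8.3e6, cull_ratio=0.0, shading_rate=4)
both      = frame_cost(2e6, 8.3e6, cull_ratio=0.5, shading_rate=4)
print(base, cull_only, vrs_only, both)  # "both" beats either one alone
```

The takeaway from the toy numbers: the savings stack, which is exactly why "GE vs. VRS" is a category error.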
------
EDIT: Also, JFC, some of you guys really need to get a life. You've developed an insane persecution complex over one source and one guy in particular, simply because they think critically about your brand of plastic and prefer another brand of plastic themselves, which nonetheless doesn't interfere with their ability to assess things neutrally the vast majority of the time.
You're literally cherry-picking examples to spotlight, shining them almost out of context at times, to try to discredit them. It's beyond pathetic. Does every person need to lick the same boots you do in order to be deemed valid in your eyes? Is ANY form of dissent grounds for sullying their character and integrity?
You know who you guys REALLY look like right now? The same twits on Twitter who dig up years-old "problematic" posts to try to get people cancelled in Current Year™. The same people who whine over every single insignificant opinion that differs from their own and launch some crusade of public shaming.
That. Shit. Is. P a t h e t i c. It's the toxicity of online discourse: the extremist mentality people have created for themselves in their echo chambers and bubbles, painting everything as "us vs. them."
That shit is pathetic enough coming from political talking heads and SJWs/anti-SJWs bickering back and forth, yet now it's infected gaming's platform-brand wars too. The same toxic energy, the same smug moral superiority of flawed righteousness. Pathetic.
So it is 7nm+ then. I saw some speculation that it would be higher than that.
The consoles are on enhanced 7nm DUV, which is closer to plain 7nm than to 7nm+. Some RDNA2 products will be on enhanced 7nm DUV (the consoles, some upcoming GPUs); others will be on 7nm EUV (7nm+), mainly discrete GPUs.
However, AMD seems intent on moving quickly to 5nm for RDNA3.
Are you comparing overclocking RDNA1 with the clock increase achieved by a new architecture (RDNA2)?
Should I overclock a 2080 Ti and then claim that raising clocks on Ampere doesn't improve anything?
I can only see two explanations for DF in this case:
a) intellectual dishonesty, or
b) stupidity.
Maybe you should look at GPUs of the same architecture and see what higher clocks actually improve? The two things most frequency-bound in terms of GPU performance are cache speed (continuous/sequential bandwidth, not parallel bandwidth, which depends on the amount of physical cache on the GPU) and pixel fillrate. We at least know this from looking at many AMD GPU benchmarks.
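Quick illustration of the fillrate point, using Navi 10 as a same-architecture pair. The clocks below are the approximate official boost clocks from memory, so treat them as ballpark figures:

```python
# Pixel fillrate = ROPs x clock. Between same-architecture GPUs with
# the same ROP count, fillrate scales directly with frequency.
# Clocks are approximate boost clocks, quoted from memory.

def fillrate_gpix(rops, clock_ghz):
    return rops * clock_ghz  # Gpixels/s

rx5700   = fillrate_gpix(64, 1.725)  # RX 5700,    36 CUs, ~1725 MHz boost
rx5700xt = fillrate_gpix(64, 1.905)  # RX 5700 XT, 40 CUs, ~1905 MHz boost

print(rx5700, rx5700xt)       # ~110 vs ~122 Gpix/s
print(rx5700xt / rx5700 - 1)  # ~10% fillrate gain, purely from clock
```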
Those are the two main areas where a smaller but faster-clocked GPU sees improvements. And that doesn't take into account the amount of power you need to drive in order to get those frequency gains: going by Cerny's own figures at Road to PS5, their system has roughly a 5:1 power-to-frequency ratio for the clock gains they're claiming, which suggests they are well outside the sweetspot range for RDNA2 on enhanced 7nm DUV; otherwise the ratio would be much lower.
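If you want to sanity-check that 5:1 figure, here's the back-of-envelope version. Assuming dynamic power goes as roughly f·V² with voltage roughly proportional to frequency in the efficient range (a standard rule of thumb, and an assumption on my part), power scales as roughly f³ in the sweetspot, which would only give about a 3:1 marginal ratio:

```python
# Toy model of the Road to PS5 figure: dropping clock "a couple percent"
# recovers ~10% power, i.e. roughly a 5:1 power-to-frequency ratio at the
# top of the curve. In the efficient range, dynamic power ~ f * V^2 with
# V roughly proportional to f, so power ~ f^3 and the marginal ratio
# would be ~3:1. The exponents here are assumptions, not measurements.

def marginal_ratio(freq_drop, power_exponent):
    f = 1.0 - freq_drop
    power = f ** power_exponent       # relative power after the clock drop
    return (1.0 - power) / freq_drop  # % power saved per % frequency lost

print(marginal_ratio(0.02, 3))  # ~2.9:1 -> what the sweetspot would give
print(marginal_ratio(0.02, 5))  # ~4.8:1 -> close to the quoted 5:1,
                                # i.e. voltage/power rising much faster
                                # than linearly: past the knee of the curve
```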
Almost everything else in GPU performance benefits from a physically larger GPU, assuming the GPUs are of the same architecture and generation; the vast majority of benchmarks verify this. So no one is really saying the faster clocks bring no benefits. The point is that they would bring more benefit if the GPU itself were of similar size to the competitor's physically larger offering, which benefits from more L1 and L2 cache, potentially more L3 cache, potentially more ROPs, TMUs, etc. (talking AMD here).
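The width side of the argument is just arithmetic. The CU counts and clocks below are the public console figures; the 64-lane, 2-FLOPs-per-clock math is standard RDNA FP32, and the cache comment is my own framing:

```python
# Compute throughput scales with width x clock; per-CU resources like
# local cache and texture units scale with width alone. CU counts and
# clocks are the publicly stated console figures.

def teraflops(cus, clock_ghz, lanes_per_cu=64, flops_per_lane=2):
    # FP32: 64 shader lanes per CU, 2 FLOPs per lane per clock (FMA).
    return cus * lanes_per_cu * flops_per_lane * clock_ghz / 1000

narrow_fast = teraflops(36, 2.23)   # PS5-like: 36 CUs @ up to 2.23 GHz
wide_slow   = teraflops(52, 1.825)  # XSX-like: 52 CUs @ 1.825 GHz

print(narrow_fast, wide_slow)  # ~10.3 vs ~12.2 TF
# The wider chip also simply has 52/36 = ~44% more CU-local cache and
# texture units, which no clock bump on the narrower chip can replace.
```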
That's just the bare truth.