There's one big reason this doesn't really add up: why do all that work for backwards compatibility through dual GPUs, with literally only one of the GPUs even handling it, when a larger single GPU die is potentially both cheaper and more logical? The PS5 chips we've seen results for (Gonzalo, Ariel and Oberon, and I believe others have said Ariel is basically an earlier version of Oberon) are all 40CU chips. If people are claiming a figure like 13TF, then they'd be disabling four CUs on each chip, for 72 active CUs @ ~1420MHz across both.
...but if that's the case, why would the PS5 devkit need the cooling it seemingly has? From the looks of it, that type of cooling is overkill for a system with GPUs running at clocks that low. And we've already seen Oberon testing at 2GHz... why do that for a system intended for a dual GPU setup? Unless, what, they have one Oberon chip at 2GHz or some super-high number like that, and the other at sub-1000MHz (well below any modern GPU card's clocks)!? So in reality, wouldn't PS5's TF number be closer to, say, 15.6TF-16TF (assuming they could at least match XSX's possible GPU clock)? But absolutely no one credible is throwing that figure around.
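For reference, here's the napkin math I'm using; just a quick sketch assuming the usual 64 shaders per CU and 2 FLOPs per shader per clock, and the CU counts/clocks here are rumored figures, not anything confirmed:

```python
# FP32 TFLOPs napkin math: CUs x 64 shaders/CU x 2 FLOPs/clock x clock (GHz).
# 64 shaders per CU and 2 FLOPs per clock are the usual GCN/RDNA assumptions.
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

# Dual 40CU Oberons with four CUs disabled per chip = 72 active CUs.
print(tflops(72, 1.42))  # ~13.1 TF, the figure being rumored
print(tflops(72, 1.70))  # ~15.7 TF if both ran at an XSX-like clock
print(tflops(72, 1.75))  # ~16.1 TF
```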
The other idea would be that, in looking for a somewhat sane price point on a dual GPU system, they'd clock both insanely low to keep cooling costs down. But even a modestly pricey cooling setup is still cheaper than a second GPU (plus the cost of implementing it in the design). So even with that in mind, a dual "small" GPU setup with reduced cooling is going to have a higher BOM than a monolithic larger GPU with sufficient cooling. And for what, exactly? A possible single-TF performance edge, if that? Doesn't seem economically sensible imo.
I mean, monolithic GPUs already have CUs disabled for yield purposes; they could disable any number of additional CUs as needed if they wanted hardware-level back-compat for PS4 and PS4 Pro. That'd be smarter than a dual 40CU design where you literally just turn one chip off, yet even both together don't deliver a substantial performance boost over a 60CU Navi chip (at least with the implementation Sony would theoretically use to reach the 12+/13TF claims in some of these other rumors).
So, could they be using dual Oberons in the system? Maybe. But given the much-lower-than-maximum performance they'd be running them at, it (IMHO) only lends credence to the idea that at some point they were going with a single 40CU chip. That would explain the 2019 launch rumors, the reports mentioning a delay into 2020, and the rumors of a system targeting $399 specs, and it even helps put into perspective some of the sudden departures from SIE and other PS divisions last year, like Shawn Layden's. Something would have forced them into a redesign; maybe it was the XSX?
Personally, I just find it fascinating, because if the dual GPU spec really plays out that way, with two Oberon (or whatever its final name will be) chips giving only an extra TF of performance over XSX (if that), then it humorously feels a bit like karma (if you can call it that) biting Sony. It's almost completely analogous to the rapid redesign SEGA did on the Saturn over two decades ago after finding out the PS1's specs. Partly because Sony is a larger company, and partly because technology just isn't anywhere near as esoteric in the console space anymore (hell, AMD's doing the heavy lifting for both next-gen systems), I can see Sony succeeding at such a redesign where SEGA ultimately fumbled all those years ago... but that wouldn't change the (potentially) similar factors leading to those respective redesigns.
Which honestly just makes me question whether Sony didn't consider MS or Nintendo much of a threat for next-gen, hence initially going with a more conservative design. It's quite interesting to see how these parallels can play out; it's what makes learning console history so fun. Now, the other part of this equation that could make everything I'm saying moot: what if, instead of dual Oberons, PS5 has a 40CU GPU coupled with a smaller (say, 20CU) chip?
Now THAT would make more sense, letting both chips reach their sweetspots while offering performance either around XSX's 12TF figure or slightly more (see the napkin math at the end of this post). You wouldn't have two GPU clusters operating well below sweetspot efficiency, and each chip could have just two CUs disabled rather than four. You'd still need a sufficient cooling solution, but (potentially) nothing more expensive than what the XSX is doing.
But again, that presents another problem: why haven't we uncovered any evidence of that other GPU? It's getting pretty late and the reveal is literally around the corner. We should have at least SOME hard data on such a chip via GPU benchmark leaks, but to the best of my knowledge, nothing's come up. Hell, we even have a benchmark test for Lockhart's GPU, and we know even less about that than the PS5 xD.
EDIT: I'd also add to my speculation that if a dual GPU setup with 2x Oberons and ~13TF were the performance target for PS5, we wouldn't be seeing Oberon GPU benchmarks at 2GHz as late as June 2019, and we'd have seen at least a benchmark or two by now with Oberon around 1.42GHz (we already know the other two clocks were regression tests for PS4 and PS4 Pro).
If a dual GPU approach with Oberon and a 2nd chip half its size is the path to a 12-13TF PS5, then we'd need a leak for that smaller chip that can be tied to Oberon. That hasn't happened yet and doesn't seem like it's going to. Also, at some point an Oberon benchmark would need to show up clocked around 1.7GHz, since you'd assume both it and the other GPU would run at the same clock, and 1.7GHz on 56 active CUs is how you'd get to 12TF; 1.75GHz on both would push them near 12.5TF, and anything a bit above 1.8GHz lands slightly over 13TF.
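To show my work on those clocks (same napkin formula as above, assuming the hypothetical 40CU + 20CU split with two CUs disabled per chip, so 38 + 18 = 56 active CUs):

```python
# Same napkin formula: CUs x 64 shaders/CU x 2 FLOPs/clock x clock (GHz).
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

# Hypothetical asymmetric split: 40CU and 20CU chips, 2 CUs disabled on each.
active_cus = (40 - 2) + (20 - 2)  # 38 + 18 = 56 active CUs

for clock_ghz in (1.70, 1.75, 1.80, 1.82):
    print(f"{clock_ghz:.2f} GHz -> {tflops(active_cus, clock_ghz):.2f} TF")

# 1.70 GHz -> 12.19 TF  (the ~12TF XSX-level figure)
# 1.75 GHz -> 12.54 TF  (near 12.5TF)
# 1.80 GHz -> 12.90 TF
# 1.82 GHz -> 13.05 TF  (you need a bit over 1.8GHz to clear 13TF)
```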