Well, I think it's safe to say that the Github leak needs to...

Better than darkinstinct, who says PS5 is RDNA 1.999999999999999.
(replying to: "Nope, colbert now says github oberon is rdna2 before the reveal. The flipflops and making shit up, lol.")
It does. Unless my math is completely off, 12 TFLOPs doesn't seem too hot.
(replying to SlimySnake: "Does 12TF now seem kinda conservative/weak with the revelation of 50% better perf/watt??? Nobody was expecting anything close to those gains.")
Not really, DX12 crap again. AMD is claiming they co-developed an API with Microsoft. Ray tracing is a new technology for developers in the console space. Sony making its own API, that's interesting!
We have to wait and see if PS5 can match the Xbox Series X's ray tracing performance. Sony uses different APIs for their consoles, so AMD would have to be involved in developing a new API to be compatible. Right now AMD has only confirmed working with MS to get the best performance from their silicon.
Found a very good post (among lots of others) over on Beyond3D that should put the Ariel/Oberon stuff in much better perspective for people who don't understand what the Oberon tests were actually testing. The post is from user AbsoluteBeginner:
This is basically the best working case for Oberon that also fits what most insiders have been saying: that Oberon's tests were regression tests set to the Ariel iGPU. Since Ariel was an RDNA1-based chip, it did not have RT/VRS built in. Even if Oberon has RT/VRS (in fact it's pretty damn guaranteed now after today's AMD Financials thingy), they would not be enabled when running an Ariel iGPU regression; even users here like R600 mentioned this months ago.
It also would indicate that the Oberon tests that have been datamined so far do not tell everything about the chip. They may or may not mention the chip's CU count (IIRC the first Oberon stepping listed "full chip" in its log), but we've already seen later steppings change the memory controller to increase bandwidth to the chip. We don't know if Oberon has an extra cluster of CUs disabled in the steppings beyond the very first one, but if there were, I'm thinking it would have been from the 2nd stepping onward, and I would think something like that would call for a chip revision instead of just another stepping, but I dunno. Even so, we don't know how many additional CUs are present, if any.
And something else to consider: I saw some people saying AMD mentioned "multi-GHz GPUs" during a segment on GPU products and systems releasing this year? Did that happen? If so, I don't think they would use that phrase if they weren't talking 2GHz or greater, and we know Oberon has a clock of 2GHz. And now we practically know PS5 is RDNA2, which has upwards of 50% more efficiency than RDNA1. That would obviously shift the sweetspot northward too, which makes an RDNA2 chip at those clocks a lot more feasible. It's still maybe crazy, but not as crazy as a lot of people were thinking before today's news, eh?
Although that raises an interesting question about why XSX's clocks are "so low" if RDNA2 efficiency is so much better. Either the 50% claim over RDNA1 is AMD PR talk, or MS felt no need to push the clock higher and chose guaranteed stability at a cooler GPU clock. However, that also means they left themselves the option of upping the clocks if Sony outperformed them on the GPU front in TFs. The fact they have seemingly gone with a 1.675GHz - 1.7GHz clock on an RDNA2 chip (with the sweetspot probably shifted a good bit northward from RDNA1's 1.7GHz - 1.8GHz) might hint that they are fairly certain they have the stronger of the two machines, but the question now is by how much? (Also, I kinda shamelessly took the idea of XSX clocks indicating anything relative to PS5 from another post over there, but I thought it was worth thinking about.)
So yeah, there are still a lot of unknowns, but given Oberon E0 was tested into December of last year, I'm pretty much 100% sure Oberon is the PS5 chip. However, I'm also pretty much 100% sure we haven't actually seen a benchmark testing Oberon itself, just the Ariel iGPU profile regressed on Oberon, meaning we haven't seen the entirety of the chip (I think this is exactly why Matt said "disregard it" in reference to Github, because it wasn't testing the full chip or really anything of the chip outside the Ariel iGPU). And that's the fun part, because it can run a wide gamut. However, knowing RDNA2 efficiency and XSX's pretty "tame" GPU clock, and that high-level MS and Sony people would know a lot more about each other's systems than any of us, that might signal MS is comfortable with the lower clock because they're fairly certain they at least have the bigger chip. Whether that means PS5 is 36/40 or (like a die estimate from a few months ago speculated) 48CUs, or maybe even into the very low 50s, is unknown.
That's why I've been rolling with 48CUs as Oberon's actual size, with four probably disabled for yields. At 2GHz that hits around 11.26TF, which is better than my earlier numbers, even. It does depend on Oberon's full size being 48, though, and on whether they can actually keep the 2GHz clock stable, because that is probably still a tad north of RDNA2's upper sweetspot range.
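(Quick sanity check on those numbers, a minimal sketch using the standard FP32 throughput formula for GCN/RDNA GPUs, TFLOPS = CUs x 64 shaders x 2 FLOPs per clock x clock in GHz / 1000; nothing here comes from the datamine itself:)

```python
# FP32 throughput for a GCN/RDNA-style GPU.
# Each CU has 64 shaders, each doing 2 FLOPs per clock (fused multiply-add).
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

print(tflops(44, 2.0))  # 48 CUs with 4 disabled, at 2 GHz -> 11.264, the ~11.26TF above
print(tflops(48, 2.0))  # full 48 CUs at 2 GHz -> 12.288
```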
Either way I think we can ALMOST certainly put the 9.2TF PS5 talk to rest now. Funnily enough, today's news actually reaffirms the datamines, the leak and even the insiders, if there's more to Oberon in terms of CUs than the initial test that showed 40 as the "full chip" (which, to be perfectly fair, could have just been referencing the Ariel iGPU profile, since Ariel is a 40CU RDNA1 chip). And being 100% fair, while I do think MS clocking XSX as low as it is (1.675GHz - 1.7GHz) is both odd and maybe indicative that they're comfortable they have a performance edge over PS5, Oberon could honestly also be a 58 or 60 CU chip, because again there's the whole butterfly thing, and 18x3 gives you 54. So it could be a case where MS knows they have an advantage right now, but Sony could have upped performance, and then MS responds by using their headroom to push clocks higher.
Or it could even be that MS doesn't know as much about PS5 as some think, but they might know Oberon is also a big chip, and they want to see for certain where PS5 actually lands by throwing 12TF out there. So if PS5 reveals its number and it's the same or somewhat larger, MS can enable a GPU upclock to match or surpass it. And I would think they have already tested the GPU at higher clocks by now, just in case that type of scenario plays out. That's the other way to read their announcement from last week, anyway.
But again, it all hinges on what Oberon fully is, and we'll only know for sure if another benchmark test gets datamined that isn't running the chip on an Ariel iGPU profile. That could come this week, or within the next few weeks; hopefully soon. If it does and we still see a max 40CU chip, then it's time for people to accept that. If it's a larger chip at around 48CUs, then they could either run it with 4 CUs disabled or all 48 enabled, which gets them between 11.26TF and 12.28TF at 2GHz, i.e. virtually identical to XSX. If it's even larger, like a 60CU chip, and they're still running at 2GHz even in that case, then it just means MS can upclock the XSX at a rate they've already internally tested as a contingency plan to close the performance gap, because anything beyond 2GHz in a console-like form factor is probably gonna melt silicon.
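(Putting rough numbers on those scenarios with the same formula as the sketch above; note the ~56 CU figure for XSX below is purely my own back-of-envelope assumption derived from the 12TF / ~1.675GHz numbers floating around this thread, not anything confirmed:)

```python
# Same FP32 formula: TFLOPS = CUs * 64 * 2 * GHz / 1000
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

# The Oberon scenarios discussed above, all at 2.0 GHz:
for cus in (40, 44, 48, 60):
    print(f"{cus} CUs @ 2.0 GHz: {tflops(cus, 2.0):.2f} TF")
# -> 10.24, 11.26, 12.29, 15.36

# Hypothetical: if ~12 TF at ~1.675 GHz implies roughly 56 CUs for XSX,
# the upclock needed to match a 60 CU PS5 at 2.0 GHz would be about:
xsx_cus = 56  # assumption, not confirmed anywhere
print(tflops(60, 2.0) / (xsx_cus * 64 * 2 / 1000))  # ~2.14 GHz
```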
Thing is, all three of those scenarios have a roughly even chance of playing out, and we're only going to get a better, fuller indication a few weeks from now. Don't throw away one of those possibilities just because you prefer another; there honestly isn't a very strong reason to rule any of them out just yet.
But we CAN throw out the idea that PS5 isn't using RDNA2; that much is essentially official.
lol, 9 RDNA 1.0 TFLOPs is 6 RDNA 2.0 TFLOPs.
(replying to: "Well, I think it's safe to say that the Github leak needs to...")
It's not a big failure, mate. I'd say it's average/above average.
(replying to: "Only 40-50 million Xbox One sold (big failure) confirmed by AMD.")
Good post. And not sure if you saw this, but I posted it a few days ago here. It's an AMD link that shows that in AMD terminology, a letter change is a full model revision, each with a new model number. The number changes are the steppings. https://www.neogaf.com/threads/next...-analysis-leaks-thread.1480978/post-257184963
Now I believe in GitHub
9.2TFs + 50% = 13.8 TFs
It was true since the beginning!
Navi 2X not being Navi 10 Lite, which Oberon is, is the last nail in the GitHub leak's coffin.
(replying to: "Good lord man. The other place is having a meltdown over the PS5 being RDNA 2. Doesn't that basically confirm that the Github information doesn't have any conclusive evidence about PS5?")
I'm pretty sure MS can clock the XSX higher, but they decided to stick with 12 TF. Clocking it higher, especially on bigger chips, would bring them worse yields, and I believe they want the console quieter.
(replying to: "But the XSX APU is huge, unless they are severely underclocking it then 14TF seems possible.")
That's because Sony hasn't announced their specs yet like Microsoft did. And this has to be talked about for the PC arena, like MS did when working with nVidia.
Both companies are assigned teams from AMD to work on the silicon and customizations. This isn't Sony's first rodeo with ray tracing: they've been showing it off at GDC every generation since the PS2, and talking about RT more heavily from the PS3 gen on.
Cerny has been talking about it for over a decade. What do you think they will be doing? Sitting on their hands?
Sony already has its own API for the PS4.
When it comes to RT they could use Vulkan or OpenGL, or even do their own thing, which they have been doing since the PS1.
I agree wholeheartedly that's your opinion, but many have their own as well. Mine being that it's below average, given the amount of R&D Microsoft spent, not to mention marketing. The effort does not match the results.
(replying to: "It's not a big failure mate. I'd say it's average/above average.")
I said it earlier: Sony's API solutions are either as good or better, smfh. DX12 gave no advantage, as Sony created their own APIs that were great and efficient.
Like, don't people get tired of spewing the same shit every gen? Xbox will have no advantage on the software API front. Sony has some of the smartest and best engineers and software people, console-wise, in the industry, competent enough to create APIs on par or superior, and features including ray-tracing. Sony has been involved in ray-tracing for years.
Ah, I knew I was missing something.
(replying to: "BTW, the math for the 50% perf/watt works like this...", quoted in full below)
Mental gymnastics... Do you mean subtraction?
(replying to: "That sure is some mental gymnastics he is doing.")
The Github leak still got MI100, Renoir and Arden 100% right, and after today's news, RDNA 2 can handle high clocks very well, yeah... At least it's better than the fake insiders on this site.
(replying to: "Well, I think it's safe to say that the Github leak needs to...")
BTW, the math for the 50% perf/watt works like this:
1/1.5 ≈ 66.7%
That means 50% better perf/watt equals about 66.7% of the power consumption going from RDNA to RDNA 2.
E.g.
RDNA: 200W
RDNA2: 133W
Both at the same TFs.
Using the RX 5700 XT's 225W, the same chip (CUs, clocks, etc.) in RDNA 2 should have a TDP of around 150W.
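(A minimal sketch of that arithmetic, assuming the +50% perf/watt claim applies at iso-performance, i.e. the same CU count and clocks:)

```python
# Power at the same performance after a perf/watt improvement:
# new_power = old_power / (1 + gain)
def iso_perf_power(old_power_w: float, perf_per_watt_gain: float) -> float:
    return old_power_w / (1 + perf_per_watt_gain)

print(iso_perf_power(200.0, 0.5))  # ~133.3 W, the 200 W example above
print(iso_perf_power(225.0, 0.5))  # 150.0 W, the RX 5700 XT case
```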
Full quote:
"We also provide lower-level API support, that gives more control to the developers so that they can extract more performance from the underlying hardware platform. This will help mitigate the [ray tracing] performance concern. [...] The latest Microsoft DXR 1.1 API was co-architected and co-developed by AMD and Microsoft to take full advantage of the full ray tracing architecture."
Either AMD co-architected and co-developed an API for Sony, or they did not. Just something to watch out for. Ray tracing is not performance-free!
Maybe MS left some room for clock.
(replying to: "Which begs the question, why is XSX only 12TF, has a massive APU die size, and a massive tower design????")
I think he's trying to reach light speed... or warp 10.
(replying to: "Better than darkinstinct, who says PS5 is RDNA 1.999999999999999.")
I believe Sony already has its own API for RT... That is why the Sony dev was asking if MS fixed the DXR/RDNA2 limitations with ray-tracing.
(replying to the "Full quote" post above)
The die just means MS went wide and slow, and someone thought that tower was a good idea, lol.
(replying to: "Which begs the question, why is XSX only 12TF, has a massive APU die size, and a massive tower design????")
Killed by their own god.
(replying to: "Here's the _rogame tweet saying it is Navi 10, if anyone wants to see it.")
So is it confirmed that PS5 will use RDNA 2 or not?
To act as if Xbox didn't catch up to Sony in the 360/PS3 generation is just being disingenuous. That is also a mighty strong reaction to someone suggesting a path for MS to be competitive again.
(replying to: "Catch up again? PS3 outsold Xbox 360 worldwide almost every single month during its lifetime with a less reputable online service, its online going down, inferior multiplats, and costing hundreds more. Nothing Xbox does will bring them close to catching up as long as Sony doesn't have a monumental fuck up.")
The dev was talking about ray-tracing on RDNA2 plus DXR.
(replying to: "I'm sure they do, they've been working with ray tracing on this console and testing it out for their games. I mean they even had software-based ray-tracing from like the PS2 days lol.")
No, they clearly spoke about 2 Xbox consoles, and maybe a Switch Pro? It doesn't matter that 2 of those 3 are not announced.
(replying to: "So is it confirmed that PS5 will use RDNA 2 or not?")
I bet that's a 200W console like the PS3, max.
(replying to: "The die just means MS went wide and slow, and someone thought that tower was a good idea, lol.")
Consoles will use custom APUs; AMD will add whatever the client tells them to add.
(replying to: "They also said it was Navi 10, whereas AMD confirmed today that both are using Navi 2X, so...")
I missed the good half of the conference. Can you please tell me if this was explicitly stated? Or was it more obfuscation, like that AMD engineer who said both consoles have native RT but didn't say if they were both RDNA 2.0?
(replying to: "Confirmed.")
Consoles will use custom APUs; they will add what the client tells them to add. Right now we know Oberon will have RDNA 2.0 features; what we don't know is whether Cerny and his team were able to add more CUs in later revisions.
Here's the _rogame tweet saying it is Navi 10, if anyone wants to see it
I think almost everyone here has an idea that ray tracing will affect performance to a great degree, even with APIs mitigating it. Both companies will have APIs helping with that. I've heard many comments around here like "oh, so if ray tracing is on, a game might be 1080p/30fps, but with it off it might be 4K/60fps?"
Yep. I'm pretty sure I've mentioned this before, but I think both are going to get into a battle of clocks as we approach the console releases. MS might try to hit 13 TFLOPs too.
(replying to: "Seems like they have a ton of headroom to go wide and relatively fast.")
Didn't Osiris once say Sony's tools are currently more advanced than MS'?
(replying to: "Holy shit dude, where have you been the last 7+ years, especially with the design of the PS4? They have an entire team of competent architects working on the damn thing. They have their own in-house APIs, game engines, toolsets, and everything under the sun like any other platform. Are you being purposely obtuse, or?")