
Next-Gen PS5 & XSX |OT| Console tEch threaD


jose4gg

Member
I generally agree with what you said, but don't you find it a little disingenuous to frame the XSX GPU as sacrificing something to reach its CU count? Their GPU is running at 1800 MHz IIRC. Is that not considered a high clock speed to run sustained in the GPU space? This is a genuine question.

Sorry if what I wrote was confusing; by no means is the clock speed in the XSX slow.

What I wanted to say is that it cannot go faster, because with a fixed-clock strategy the heat produced at higher clock speeds is too much for current cooling solutions.

Again, I just pointed out the cost because we are not simply comparing 52 CUs and 36 CUs; these CUs are different, and that difference is worth pointing out.
 

Ptarmiganx2

Member
How about this for evidence?

Cross-gen foliage:
[GIF: Ln81HzB.gif]
[GIF: Q5GBjVF.gif]

Next-gen foliage:
[GIF: HmOedDT.gif]

Cross-gen wildlife:
[GIF: tumblr_ppl8j3WFQv1t4wjzko2_r2_500.gifv]

Next-gen wildlife:
[GIF: yPWXmMK.gif]

Cross-gen character models:
[GIF: pO4SPEJ.gif]

Next-gen character models:
[GIF: W5MjoKo.gif]


And this is just me comparing it to PS5 games. Hellblade 2 is such a massive leap over Halo and other cross-gen games that no one here believed it was real-time.

I admire the humility and you extending an olive branch, but why don't you expect next-gen games to look next-gen compared to the cross-gen games MS will be releasing for the first two years? Hellblade 2 clearly shows that next-gen-only games by MS will look better than cross-gen games by MS. So why aren't we assuming next-gen games on PS5 will look better than cross-gen games on the XSS?
The first Hellblade was a great-looking game. However, it's on pretty tight rails and the environment is mostly lifeless.
 
This is the first game in the series for which we know Ubisoft won't hold the game back a couple of months to make the engine compatible with the new machines. This is a yearly saga.

The next AC will look better and run better, I hope.
Yup. Next year we'll get another and it will look fantastic. Granted, I personally was totally over the series after Brotherhood, but people still love it, I guess. I think one of the worst things holding back the series, besides what everyone here could point out over the years, would definitely be the current-gen CPU. From the 360/PS3 to Jaguar there was barely a drop of improvement in the CPU department. With all the extra CPU power these next-gen systems have, it would be a huge disappointment if the series continued on with the same stale and sterile gameplay. Whatever next-gen improvements come, I still don't know if it will be enough for me to drop back in if it's still aiming to be just a map-filling side-quest adventure.

Whenever I see an Assassin's Creed game, I can't help but think of people posting before-and-after shots of cleaning up a local beach or whatever on Reddit.
 

silent head

Member
Series X "Machine Learning Performance":

XSX
INT8 = 49 TOPS
INT4 = 97 TOPS

RTX 2080Ti Founders Edition
INT8 = 227.7 TOPS
INT4 = 455.4 TOPS

RTX 2080 Founders Edition
INT8 = 169.6 TOPS
INT4 = 339.1 TOPS

RTX 2070 Founders Edition
INT8 = 126 TOPS
INT4 = 252.1 TOPS

RTX 2060
INT8 = ~100 TOPS
INT4 = ~200 TOPS

So yeah, around half that of an RTX 2060. I don't know how significant this really is or how it's going to play out.
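(For context, the XSX figures line up with a quick back-of-envelope check. Below is a minimal sketch in Python, assuming the quoted numbers come from running INT8 at 4x and INT4 at 8x the FP32 rate via mixed-precision dot-product instructions; the Nvidia figures come from dedicated tensor cores, so they are treated as given data rather than derived.)

```python
# Back-of-envelope check of the quoted XSX ML figures.
# Assumption (not stated in the post): INT8 runs at 4x and INT4 at 8x
# the FP32 rate, so TOPS = FP32 TFLOPS * rate multiplier.

XSX_FP32_TFLOPS = 12.15  # 52 CUs x 64 lanes x 2 ops/clock x 1.825 GHz

def tops(fp32_tflops: float, multiplier: int) -> float:
    """Integer throughput implied by an N-x rate multiplier over FP32."""
    return fp32_tflops * multiplier

int8 = tops(XSX_FP32_TFLOPS, 4)  # 48.6 -> matches the quoted ~49 TOPS
int4 = tops(XSX_FP32_TFLOPS, 8)  # 97.2 -> matches the quoted ~97 TOPS

# The Turing cards get their TOPS from dedicated tensor cores, so the
# same multiplier doesn't apply; use the quoted figure as data.
RTX_2060_INT8_TOPS = 100.0
print(f"XSX: INT8 ~{int8:.1f} TOPS, INT4 ~{int4:.1f} TOPS")
print(f"vs RTX 2060 INT8: {int8 / RTX_2060_INT8_TOPS:.2f}x")  # ~0.49, about half
```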


 
What about the likes of Halo Infinite? We know nothing so far about it or how it was developed; what we do know is that the engine built for it was targeting next gen, so it might actually be a full-blown next-gen title. And there is The Medium, which will be a next-gen exclusive available at launch (and via Game Pass). So your statement "no next-gen games" is a bit of a stretch.

Btw, serious question: do we know what next-gen games Sony will have ready for the PS5 launch? I know Spider-Man: Miles Morales is coming at launch, but I'm not sure I can recall any other next-gen title announced for a holiday 2020 release.
That's normal, since very few dates were provided. However, you did see a few next-gen exclusive games at the PS5 showcase, whether they release on launch day, in the launch window, or within the launch year.
 

Neo Blaster

Member
Because people are exaggerating or trying to be funny. My PS4 is quiet too. The last game I played on it was TLOU2. It is a running gag at this point: "Looking at this trailer, I hope the fans won't propel my PS4 into orbit," badum tsssss. People are whoring for likes on social media 😂
So, people have no right to complain about the noise, and if they do they are attention whores? Then I think all that talk Cerny had about noise in his presentation was just him whoring for attention, what a bitch!!
 
I generally agree with what you said, but don't you find it a little disingenuous to frame the XSX GPU as sacrificing something to reach its CU count? Their GPU is running at 1800 MHz IIRC. Is that not considered a high clock speed to run sustained in the GPU space? This is a genuine question.
The question on the XSX side is not the clock, but what percentage of utilization they can get out of 50+ CUs. Like Cerny stated, it's much easier to have high utilization of a narrower but faster GPU. More rays produced, if you're into raytracing; that should be a benefit of the XSX. Fewer calculations once the rays are bouncing across surfaces compared to the PS5, due to slower clocks. I mean, it's hard to say how visible the difference will be, at least to me.
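(To put the utilization argument in concrete terms, here is a toy model; the utilization percentages are invented purely for illustration, nobody outside the platform holders has measured them.)

```python
# Toy model of narrow-and-fast vs wide-and-slow GPUs. The utilization
# figures below are hypothetical illustrations, not measurements.

def effective_tflops(cus: int, clock_ghz: float, utilization: float) -> float:
    # 64 shader lanes per CU, 2 FP32 ops (FMA) per lane per clock.
    return cus * 64 * 2 * clock_ghz * utilization / 1000.0

ps5_paper = effective_tflops(36, 2.23, 1.00)   # ~10.3 TF on paper
xsx_paper = effective_tflops(52, 1.825, 1.00)  # ~12.1 TF on paper

# If the narrower GPU kept, say, 95% of its CUs busy while the wider
# one managed 85% (both numbers made up), the real gap shrinks:
ps5_real = effective_tflops(36, 2.23, 0.95)
xsx_real = effective_tflops(52, 1.825, 0.85)

print(f"paper gap:   {xsx_paper / ps5_paper:.2f}x")  # ~1.18x
print(f"modeled gap: {xsx_real / ps5_real:.2f}x")    # ~1.06x
```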
 

Cloaka

Neo Member
Series X "Machine Learning Performance":

XSX
INT8 = 49 TOPS
INT4 = 97 TOPS

RTX 2080Ti Founders Edition
INT8 = 227.7 TOPS
INT4 = 455.4 TOPS

RTX 2080 Founders Edition
INT8 = 169.6 TOPS
INT4 = 339.1 TOPS

RTX 2070 Founders Edition
INT8 = 126 TOPS
INT4 = 252.1 TOPS

RTX 2060
INT8 = ~100 TOPS
INT4 = ~200 TOPS

So yeah, around half that of an RTX 2060. I don't know how significant this really is or how it's going to play out.



Good enough for DLSS-style upscaling via DirectML or other small tasks in games.
 

jose4gg

Member
The question on the XSX side is not the clock, but what percentage of utilization they can get out of 50+ CUs. Like Cerny stated, it's much easier to have high utilization of a narrower but faster GPU. More rays produced, if you're into raytracing; that should be a benefit of the XSX. Fewer calculations once the rays are bouncing across surfaces compared to the PS5, due to slower clocks. I mean, it's hard to say how visible the difference will be, at least to me.

Yeah, and again... that does not mean it is "hard" to fill the 52 CUs, but at times when games leave 2, 6, or 10 CUs unutilized, we can expect all of the PS5's CUs to be in use.

It's not that the XSX's CUs are hard to fill, it's that fewer CUs will always be easier to fill.
 

Stuart360

Member
Yeah, and again... that does not mean it is "hard" to fill the 52 CUs, but at times when games leave 2, 6, or 10 CUs unutilized, we can expect all of the PS5's CUs to be in use.

It's not that the XSX's CUs are hard to fill, it's that fewer CUs will always be easier to fill.
Sometimes Nvidia GPUs have lower clocks on higher-end cards compared to the lower-end cards in the same family. The higher-end cards are still way more powerful though, due to the extra CUs and tech.
 

Dolomite

Member
Both consoles are custom RDNA2; they are both hybrids. When AMD releases the PC parts, we will see the differences.

You're hooked on "Sony is not full RDNA2 and is custom," but fail to mention that MS is also custom RDNA2.

You're trying to console war based on no information about PC RDNA2, MS's custom RDNA2, or PS5's custom RDNA2.

Why have we not seen PC RDNA2 yet? What is AMD adding, and why is it taking longer than the consoles? AMD could be doing tensor cores or their own DLSS.
Yes and no. You're correct that both are custom iterations of RDNA2; you're incorrect in asserting this makes them both hybrids. MS did not make an RDNA1/RDNA2 hybrid. Using features that have been made obsolete or replaced altogether with the jump to RDNA2, like primitive shaders, is what makes the PS5's RDNA a hybrid. The lack of ML, the lack of VRS, and mesh shaders are all features that Sony would have had to consciously exclude and replace from what was already available in RDNA2 for what you're asserting to be true. According to you, Sony said "nah, we don't need these new mesh shaders, let's replace them with primitive shaders instead".
 

geordiemp

Member
The difference is MS's is custom RDNA2, not an RDNA hybrid. Using features that have been made obsolete or replaced altogether with the jump to RDNA2, like prim shaders

Yes and no. You're correct that both are custom iterations of RDNA2; you're incorrect in asserting this makes them both hybrids. MS did not make an RDNA1/RDNA2 hybrid. Using features that have been made obsolete or replaced altogether with the jump to RDNA2, like primitive shaders, is what makes the PS5's RDNA a hybrid. The lack of ML, the lack of VRS, and mesh shaders are all features that Sony would have had to consciously exclude and replace from what was already available in RDNA2 for what you're asserting to be true. According to you, Sony said "nah, we don't need these new mesh shaders, let's replace them with primitive shaders instead".

You're spreading FUD and talking crap.

What do mesh shaders do? What does the new PS5 geometry engine do? What's the difference?

You don't know, do you?

Sony don't change names for certain things; they just call it the new Geometry Engine. I guess it's confusing to some. Cerny states what the new Geometry Engine does: backface culling, procedural generation, and smooth shading. He stated it's new for PS5, along with the cache scrubbers and, again, the Coherency Engine.

Both consoles name things differently, so fucking what; it's what function it performs and how useful it is that matters.
 

jose4gg

Member
Sometimes Nvidia GPUs have lower clocks on higher-end cards compared to the lower-end cards in the same family. The higher-end cards are still way more powerful though, due to the extra CUs and tech.

Yes, we can see this especially when using things like ray tracing, but Nvidia's high-end cards are not just more CUs with lower clock speed. There are multiple things that are higher in the high-end GPUs; memory bandwidth is one key area where they shine too.
 

geordiemp

Member
VRS
MESH
ML: all RDNA features. All missing.
And again, TO BE CLEAR, if I'm wrong I'm wrong 🤷🏾‍♂️ I won't live and die on this hill, but I'll always look at marketing critically and research it for myself.

What do mesh shaders do? Oh, backface culling. What else is new?

What did Cerny say about the new geometry engine? Backface culling.

Maybe that doesn't count from Cerny because he didn't give it a fancy name for warriors to spout about.

What's next, ML?

Cerny said the new GE does procedural geometry on the fly. I wonder what that could be, um, let me think, it's a hard one... Does it have a fancy name warriors can bring up?

Whatever floats your boat. I can link you to Cerny's presentation. Go listen to Road to PS5, as your research is currently standing at 0, and when he says backface culling you can say mesh shaders in your head, and when he... blah blah.
 

M1chl

Currently Gif and Meme Champion
Series X "Machine Learning Performance":

XSX
INT8 = 49 TOPS
INT4 = 97 TOPS

RTX 2080Ti Founders Edition
INT8 = 227.7 TOPS
INT4 = 455.4 TOPS

RTX 2080 Founders Edition
INT8 = 169.6 TOPS
INT4 = 339.1 TOPS

RTX 2070 Founders Edition
INT8 = 126 TOPS
INT4 = 252.1 TOPS

RTX 2060
INT8 = ~100 TOPS
INT4 = ~200 TOPS

So yeah, around half that of an RTX 2060. I don't know how significant this really is or how it's going to play out.


Honestly, training a machine learning model and then running inference against the trained model require vastly different performance, obviously.
 

Brudda26

Member
VRS
MESH
ML: all RDNA features. All missing.
And again, TO BE CLEAR, if I'm wrong I'm wrong 🤷🏾‍♂️ I won't live and die on this hill, but I'll always look at marketing critically and research it for myself.
Mesh shaders and primitive shaders are the same thing. AMD call them primitive shaders, and they will still call them primitive shaders on RDNA2 because that's the term they use. Sony have obviously done some customisation in that area too, as it's called the Geometry Engine by them. Different companies use different terminology.

PS5 also uses some form of ML, considering EA has already mentioned it. It might not be RDNA-derived and GPU-based. Sony Interactive Entertainment (the PlayStation division) have patented a lot of machine learning stuff.
 

ZywyPL

Banned
Yeah, and again... that does not mean it is "hard" to fill the 52 CUs, but at times when games leave 2, 6, or 10 CUs unutilized, we can expect all of the PS5's CUs to be in use.

It's not that the XSX's CUs are hard to fill, it's that fewer CUs will always be easier to fill.

Exactly, more CUs never hurt; you are guaranteed to have that extra computing power whenever needed, like to maintain a solid framerate or native resolution, whereas if a scene needs 40-44 CUs, for example, but you have only 36 at your disposal, then sacrifices will have to be made somewhere. Bottom line is, if it was truly hard to fill all those CUs, then cards like the 2080 Ti wouldn't be so far ahead of the rest of the GPUs, especially at 4K, where performance is solely GPU-bound.
 

Neo Blaster

Member
It's not about them being liars.

It's about their audience. The average gamer would barely understand RDNA 1 or RDNA 2, let alone the complexities that come with "RDNA 1 with some RDNA 2 features". It's far easier to just say RDNA 2. It's not so much about lying.
So, Lisa Su and Cerny are not liars, just patronizers of laymen?

Now I've seen everything here.
 

geordiemp

Member
Mesh shaders and primitive shaders are the same thing. AMD call them primitive shaders, and they will still call them primitive shaders on RDNA2 because that's the term they use. Sony have obviously done some customisation in that area too, as it's called the Geometry Engine by them. Different companies use different terminology.

PS5 also uses some form of ML, considering EA has already mentioned it. It might not be RDNA-derived and GPU-based. Sony Interactive Entertainment (the PlayStation division) have patented a lot of machine learning stuff.

Reminds me of this: you have to give it a name. Seven-minute abs.

 

thelastword

Banned
This is still a speculation and rumor thread, but I suspect some people have committed themselves to diminishing any aspect of the PS5 as much as they can. They are not discussing in good faith, because their arguments are not coming from a place of logic or any reference point (mainly history), in essence what has happened before, or trends in Sony's hardware design.


1) How could the PS5 be RDNA 1 when it's doing raytracing and has a more advanced method of VRS than RDNA 2, since it has an entire geometry engine with cache scrubbers? If the PS5 is RDNA 1, why can it do raytracing? Can I be linked to one Navi 1 card that has built-in RT capabilities like RDNA 2 or the next-gen consoles, including the PS5? You see, this whole RDNA 1 bit from certain posters is only used to suggest that the PS5 is not as advanced as the Series X, and that has been happening for a while now. Cerny has confirmed it, Lisa Su has confirmed it, but people need to get an edge somehow. Apparently the 16% TF divide and the 3% CPU divide are not enough.


2) Sony has a history of going cutting edge every generation; they use the latest features available. When other consoles brought proprietary discs, like the Dreamcast's GD-ROM and the GameCube's 1.5GB discs, Sony went with 4.7GB DVDs, had a tonne of USB ports, FireWire, you name it, and went gung-ho with in-house CPU and GPU design. The PS3 was the only console to come with a tonne of features like WiFi and top-tier sound and media features, including Blu-ray (which was actually a new format, unlike this gen); again, lots of innovation with the in-house Cell CPU. The PS4 had 8GB of GDDR5 when most people were saying 4GB would be overkill, just like the same people trying to undersell the fast SSD this time, and it came with lots of day-one features which built up a lot of popularity on consoles: screenshots, game recording and streaming with Live from PlayStation, the Share button, a speaker on the controller, a trackpad, more CUs for GPGPU, and of course extra memory internally for media features and an extra 20GB/s bus to streamline the CPU/GPU/memory. Then the Pro, with its butterfly setup, 64 ROPs, Vega features before Vega released, FP16/RPM, the ID buffer, a Primitive Discard Accelerator that dealt with the triangle load and efficiency, and DCC providing better compression in the pipeline. Even the work distributor had features ahead of Polaris, improved tessellation being one. So you see, Sony has a history of pushing their customization work in their consoles with the latest technology every time.


So how should we be looking at the PS5? If Cerny has a more advanced/customized system to offload triangles and details not in immediate camera view, or not in view of what the player is focusing on in-game, why should it be called less than RDNA 2? It's better: primitive shading, a geometry engine, cache scrubbing, plus the insane speeds of their SSD and an insanely fast and versatile DMAC. The PS5 will be able to draw things on screen incredibly fast, especially when certain details and geometry are culled from view. It also helps that the GPU at 2.23GHz is incredibly fast due to its high clocks, so there will be less lag and frame hitching at run-time. Everything works in tandem, a very well-oiled system.

The point is, in every generation Sony engineers have given us features which never existed on consoles before. They are the pioneers of cutting-edge tech and new features. They are using RDNA 2's RT, but felt their Geometry Engine would be better suited to do VRS, and they are probably taking hooks from the RDNA 3 subset, which has not even been discussed or revealed yet; that's according to rumor. They will use what they need for their console based on their vision for it, which is no bottlenecks, a well-oiled system, developed for devs, lowest time to triangle, etc. So can we really call an advanced VRS system, RT, 7nm EUV, and clocks hitting 2.23GHz (which could even go higher) simply RDNA 1.5? Based on these advancements it's nothing short of RDNA 2+, because it's based on RDNA 2 with some methods improved specifically for the PS5.
 

3liteDragon

Member
I think the main reason (just my guess) MS went with 52 CUs for the Series X is because they're also gonna be using those APUs for xCloud servers as well. The Series X's APU is capable of running 4 Xbox One S game sessions simultaneously, and when it comes to server-based computing, there's nothing better than more parallel processing (AKA adding more CUs instead of increasing clock speeds).
 
The question on the XSX side is not the clock, but what percentage of utilization they can get out of 50+ CUs. Like Cerny stated, it's much easier to have high utilization of a narrower but faster GPU. More rays produced, if you're into raytracing; that should be a benefit of the XSX. Fewer calculations once the rays are bouncing across surfaces compared to the PS5, due to slower clocks. I mean, it's hard to say how visible the difference will be, at least to me.
That's really the only reasonable question to ask considering how close these systems are in terms of GPU. It's gonna be one of those nit-picky Digital Foundry things.
 

ToadMan

Member
The sooner people accept that the PS5 is less powerful than the XSX, the better.


15% less powerful, to be specific; the closest it's been between an Xbox and a PlayStation...
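(For anyone wondering where that figure comes from, it falls straight out of the paper TFLOPS; note the percentage depends on which console you take as the baseline.)

```python
# Paper FP32 TFLOPS: XSX = 52 CUs x 128 ops/clock x 1.825 GHz,
#                    PS5 = 36 CUs x 128 ops/clock x 2.23 GHz (max clock).
xsx_tf, ps5_tf = 12.15, 10.28

print(f"PS5 deficit:   {1 - ps5_tf / xsx_tf:.1%}")  # ~15.4% less than XSX
print(f"XSX advantage: {xsx_tf / ps5_tf - 1:.1%}")  # ~18.2% more than PS5
```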

For some people, it seems that a game console is just a set of CUs and an oscillating crystal surrounded by a lot of useless silicon and plastic.

Those would be people relying on tflops as the sole indicator of system performance ...
 
I think the main reason (just my guess) MS went with 52 CUs for the Series X is because they're also gonna be using those APUs for xCloud servers as well. The Series X's APU is capable of running 4 Xbox One S game sessions simultaneously, and when it comes to server-based computing, there's nothing better than more parallel processing (AKA adding more CUs instead of increasing clock speeds).

I don't know if they need to emulate the One S CUs in some way, but if they do, that would probably have made 48 CUs the bare minimum. I wonder how they schedule the CPU to make that all happen; they'd be trying to run code for what, 28 Jaguar threads?
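(The arithmetic behind both guesses looks like this; the One S figures are public specs, while how MS actually partitions the Series X APU for xCloud is pure speculation.)

```python
# Back-of-envelope for the xCloud theory. One S specs are public;
# the partitioning scheme itself is speculation.

ONE_S_CUS = 12          # Xbox One S GPU CU count
ONE_S_GAME_CORES = 7    # Jaguar cores available to games (rest reserved for OS)
SESSIONS = 4            # One S sessions per Series X APU, per MS

min_cus = SESSIONS * ONE_S_CUS               # 48, so 52 leaves some headroom
guest_threads = SESSIONS * ONE_S_GAME_CORES  # 28, the figure guessed above

print(f"minimum CUs: {min_cus}, guest game threads: {guest_threads}")
```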
 
It seems everyone keeps talking about the "features" of the PS5 that have yet to be revealed that narrow the gap between it and the XSX (which I really don't believe will be THAT large).

I just wonder what earth-shattering features they could divulge that make it so, short of, say, an upgrade to the specs *since* the preliminary GDC presentation, or the ever-present "secret sauce".


Moreover, has anyone else felt that this has been the absolute longest time frame to hear about the final consoles?

Microsoft's upcoming presentation will focus specifically on games (which is fine), but we're now looking at an unspecified time in August for BOTH companies to finally reveal what they will cost, what their entire feature line will be, when they'll be available, etc.

I was telling my best friend from college that it's made it harder on me to decide, as we could possibly be as little as 3 months from launch with no idea of price, features, launch titles, etc. Conversely, in years past I feel we had far more information. Covid really messed up a lot of things.
 

Doncabesa

Member
If you have someone like @Bernkastel still around here on GAF, those FUD threads about PS will always pop up. Imagine being this invested in your console; that X Discord group is really pathetic. Even calling Mod of War a pile of shit, very edgy.

[Image: qFMFQ83.png]
Nice doctored picture; you're missing all of the custom emojis, which makes it obvious. That conversation never happened.
 
I think the main reason (just my guess) MS went with 52 CUs for the Series X is because they're also gonna be using those APUs for xCloud servers as well. The Series X's APU is capable of running 4 Xbox One S game sessions simultaneously, and when it comes to server-based computing, there's nothing better than more parallel processing (AKA adding more CUs instead of increasing clock speeds).

👆 This!

Listen to what this man just said.

I 100% agree with it.
 
This, exactly this. It likely does not have any of those features.

The XSX could benefit from some of those features, just like the PS5 could benefit from some of the features the XSX has. Then again, maybe some features are not needed on the other system due to the differences in hardware. That's how I'm seeing all this.
 

Brudda26

Member
The thing I want to know is how many ACEs each console has. Asynchronous compute is going to be something to keep an eye on, as Epic want to push it. From what I recall, UE5 is tooled to make good use of it. Games that leverage async compute more would have an advantage on PS5, as they scale with clocks and not CU count.
 

Mr Moose

Member
Halo started dev in 2015; it's not built for anything other than 2015 hardware. That should be common sense.

You are right about The Medium. I totally forgot about that.
The Medium isn't first party.
There's Hellblade II, but who knows if that's also coming to Xbox One (it says "Built for the new Xbox Series X").
 

Lethal01

Member
On the recent special fucking sauce vs peak hot lie:

[Image: 05dacb62-177c-4423-8majvc.jpeg]

Yeah, but... teraflops are a terrible measurement of performance, as is just looking at max bandwidth.

This secret sauce meme is beyond old already. The PS5 has features that most agree make its SSD far more useful in gaming than an SSD listed as having several times the raw throughput.

Stop trying to dismiss the idea that there could be hardware and software that make things work better than the raw specs suggest.

Every console has a thousand things that could be considered a "secret sauce", but they stop being secrets when they launch.
 

LED Guy?

Banned
- Xbox Series X has a more powerful GPU with higher memory bandwidth than PS5.

- PS5 is a full RDNA 2 console, has VRS (Activision confirmed it), has ML (EA confirmed it), confirmed by Epic Games, AMD, Sony, developers etc...

- PS5 DESTROYS Xbox Series X in SSD architecture & speed, it’s not even comparable!!

Guys, just accept these facts, get it through your thick skulls 💀💀💀💀💀

😂😂🤣😂😂😂
 
This is still a speculation and rumor thread, but I suspect some people have committed themselves to diminishing any aspect of the PS5 as much as they can. They are not discussing in good faith, because their arguments are not coming from a place of logic or any reference point (mainly history), in essence what has happened before, or trends in Sony's hardware design.


1) How could the PS5 be RDNA 1 when it's doing raytracing and has a more advanced method of VRS than RDNA 2, since it has an entire geometry engine with cache scrubbers? If the PS5 is RDNA 1, why can it do raytracing? Can I be linked to one Navi 1 card that has built-in RT capabilities like RDNA 2 or the next-gen consoles, including the PS5? You see, this whole RDNA 1 bit from certain posters is only used to suggest that the PS5 is not as advanced as the Series X, and that has been happening for a while now. Cerny has confirmed it, Lisa Su has confirmed it, but people need to get an edge somehow. Apparently the 16% TF divide and the 3% CPU divide are not enough.


2) Sony has a history of going cutting edge every generation; they use the latest features available. When other consoles brought proprietary discs, like the Dreamcast's GD-ROM and the GameCube's 1.5GB discs, Sony went with 4.7GB DVDs, had a tonne of USB ports, FireWire, you name it, and went gung-ho with in-house CPU and GPU design. The PS3 was the only console to come with a tonne of features like WiFi and top-tier sound and media features, including Blu-ray (which was actually a new format, unlike this gen); again, lots of innovation with the in-house Cell CPU. The PS4 had 8GB of GDDR5 when most people were saying 4GB would be overkill, just like the same people trying to undersell the fast SSD this time, and it came with lots of day-one features which built up a lot of popularity on consoles: screenshots, game recording and streaming with Live from PlayStation, the Share button, a speaker on the controller, a trackpad, more CUs for GPGPU, and of course extra memory internally for media features and an extra 20GB/s bus to streamline the CPU/GPU/memory. Then the Pro, with its butterfly setup, 64 ROPs, Vega features before Vega released, FP16/RPM, the ID buffer, a Primitive Discard Accelerator that dealt with the triangle load and efficiency, and DCC providing better compression in the pipeline. Even the work distributor had features ahead of Polaris, improved tessellation being one. So you see, Sony has a history of pushing their customization work in their consoles with the latest technology every time.


So how should we be looking at the PS5? If Cerny has a more advanced/customized system to offload triangles and details not in immediate camera view, or not in view of what the player is focusing on in-game, why should it be called less than RDNA 2? It's better: primitive shading, a geometry engine, cache scrubbing, plus the insane speeds of their SSD and an insanely fast and versatile DMAC. The PS5 will be able to draw things on screen incredibly fast, especially when certain details and geometry are culled from view. It also helps that the GPU at 2.23GHz is incredibly fast due to its high clocks, so there will be less lag and frame hitching at run-time. Everything works in tandem, a very well-oiled system.

The point is, in every generation Sony engineers have given us features which never existed on consoles before. They are the pioneers of cutting-edge tech and new features. They are using RDNA 2's RT, but felt their Geometry Engine would be better suited to do VRS, and they are probably taking hooks from the RDNA 3 subset, which has not even been discussed or revealed yet; that's according to rumor. They will use what they need for their console based on their vision for it, which is no bottlenecks, a well-oiled system, developed for devs, lowest time to triangle, etc. So can we really call an advanced VRS system, RT, 7nm EUV, and clocks hitting 2.23GHz (which could even go higher) simply RDNA 1.5? Based on these advancements it's nothing short of RDNA 2+, because it's based on RDNA 2 with some methods improved specifically for the PS5.

Agree with all of this, apart from the bit about 4GB of GDDR5 being overkill for the PS4. It was the opposite: people (industry professionals) were saying that 4GB would be insufficient and they'd be D.O.A. We actually saw what would have happened to the PS4, albeit to a much smaller extent, with the OG Xbox One and its inadequate RAM setup.
 

FranXico

Member
Stop trying to dismiss the idea that there could be hardware and software that make things work better than the raw specs suggest.
He's saying that this hardware and software you are talking about can only take one so far. No amount of features, tricks or optimizations can make the PS5 GPU as capable as the Series X GPU, or the Series X IO as fast and efficient as the PS5 IO.
 

ToadMan

Member
The problem is, the PS5 clock remains a mystery, because you don't know how far the PS5 will downclock. During the "Road to PS5" talk Cerny said he expects the PS5 to stay close to 2.2GHz most of the time, but he also contradicted himself by saying the PS5 GPU can't sustain a 2GHz fixed clock (9.2TF), and that's why they have to use a variable clock.

I think this variable strategy will work very well in games with a framerate lock and fixed resolution, because in such a scenario the GPU is not pushed to its limits anyway, and the 2.2GHz clock is only used when needed. There are, however, also games with unlocked framerates and dynamic resolution. In such games the PS5 GPU will have to run at 100% for extended periods of time, and I'm afraid we will never know how far the PS5 GPU will have to downclock in that scenario.

Edit - As I can see, PS warriors have already reported my post 😃. But I'm not surprised; certain people can't stand the truth, especially when Cerny himself contradicts their beliefs.

Reality:
Old paradigm = fixed clock
New paradigm = variable clock


Cerny said it's impossible to sustain a 2GHz clock with the old paradigm (fixed clock). So based on what he said, we know the PS5 will not run at a 2GHz clock for extended periods of time, yet you want to believe the PS5 GPU will have no problem sustaining an even higher frequency. You guys want to believe in fairy tales for real? 2GHz and the higher 2.2GHz are only possible thanks to the new paradigm (variable clock). Yes, with this strategy the PS5 GPU can run at 2.2GHz when a game needs more GPU resources (for short periods of time, however). Cerny is right in saying the PS5 could hit an even higher frequency than 2.2GHz, because that's also the case on PC when you OC GPUs (it's easy to achieve a very high frequency for a short period of time, but in order to find a stable clock you have to test what the max sustained frequency is).

A variable boost clock strategy simply means more performance, and more performance is a good thing. Developers on PS5 know for sure how far the PS5 GPU will downclock in the worst possible scenario, and they can always optimize for that floor clock (everything above it will be a bonus for certain periods of time). So it's not like a variable clock is a bad thing, like some people suggest; my only issue is that Cerny doesn't want to be transparent with people about the PS5's real clock.

PS fans think the PS5 is hitting 2.2GHz because of RDNA2 architecture magic rather than the variable clock, but the thing is, MS is also using an RDNA2 GPU, yet they can't sustain more than 1.8GHz. There's no doubt in my mind Sony don't want to be transparent with people, because they know the real sustained GPU clock in the PS5 is nowhere near as impressive as 2.2GHz.

The “old paradigm” is uncapped power use with a fixed clock. That is the context of Cerny’s 2GHz comment.

The reason the PS5 can sustain higher clocks is its hardware-integrated real-time power management and activity monitoring.

If a piece of code runs within the power/activity budget, the CPU and GPU will run at full clock for as long as that is desirable.
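(A toy sketch of what that feedback loop could look like; the power curve, the budget, and every constant here are invented, since the real figures are not public.)

```python
# Illustrative activity-driven clocking: pick the highest clock whose
# modeled power fits a fixed budget. All constants are invented.

MAX_CLOCK_GHZ = 2.23
POWER_BUDGET_W = 180.0  # made-up budget

def modeled_power(clock_ghz: float, activity: float) -> float:
    # P ~ activity * f^3, from P ~ f * V^2 with V scaling roughly with f.
    return 18.0 * activity * clock_ghz ** 3  # 18.0 is a made-up constant

def clock_for(activity: float) -> float:
    clock = MAX_CLOCK_GHZ
    while modeled_power(clock, activity) > POWER_BUDGET_W:
        clock -= 0.01  # step down until we're inside the budget
    return clock

# Typical scenes run at full clock; only a pathological load (Cerny's
# map-screen example) forces a small drop rather than extra fan noise.
for activity in (0.6, 0.9, 1.0):
    print(f"activity {activity:.1f} -> {clock_for(activity):.2f} GHz")
```

In this sketch only the worst-case load gives up a few percent of clock, which is the behaviour Cerny described.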
 

Lethal01

Member
He's saying that this hardware and software you are talking about can only take one so far. No amount of features, tricks or optimizations can make the PS5 GPU as capable as the Series X GPU, or the Series X IO as fast and efficient as the PS5 IO.

That's like looking at a 15 TFLOP GPU and saying no custom hardware will let a 10 TFLOP card raytrace as well as the 15 TFLOP one.

And then you put in custom raytracing hardware and the 10 TFLOP card far surpasses the 15 TFLOP one.
Features, tricks, and optimizations don't just take you "so far"; they can absolutely let you surpass hardware with far higher "numbers".

Now, if he wants to claim he knows that there is nothing in the PS5 big enough to make a difference, fine. But to imply that no customizations could is beyond dumb.
 

geordiemp

Member
The “old paradigm” is uncapped power use with a fixed clock. That is the context of Cerny’s 2GHz comment.

The reason the PS5 can sustain higher clocks is its hardware-integrated real-time power management and activity monitoring.

If a piece of code runs within the power/activity budget, the CPU and GPU will run at full clock for as long as that is desirable.

Cerny also mentioned that a fixed 3GHz for the CPU was not workable under the old paradigm if the CPU got hit with lots of AVX 256 instructions. That's the part many miss.

He also said dense, complex geometry usually results in less heat than simple geometry, and he specifically mentions certain map screens.

All of the above gives hints about what the PS5 is doing with clocks, along with the "2% clock drop, 10% power" comments.
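(As a rough sanity check on that 2%/10% comment, using the common approximation that power scales with frequency times voltage squared; how the voltage actually scales is an assumption, since the real curves aren't public.)

```python
# Sanity check of "drop the clock a couple percent, save ~10% power".
drop = 0.98                     # a 2% clock reduction

# If voltage scales linearly with frequency, P ~ f^3:
cube_law = 1 - drop ** 3        # ~5.9% power saved
print(f"f^3 model: {cube_law:.1%}")

# Hitting ~10% implies voltage falls faster than linearly near the top
# of the V/f curve, e.g. ~4% voltage for that 2% of clock (assumed):
steeper = 1 - drop * 0.96 ** 2  # ~9.7% power saved
print(f"steeper V/f curve: {steeper:.1%}")
```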
 

Brudda26

Member
One thing I've got from all the PS5 stuff is that Sony is doing as much as they can to take unnecessary workloads away from the GPU and CPU and dedicate all of that to fixed-function hardware, so they can make much more efficient use of the GPU and CPU. I don't think some people realise how big a deal this is.
 