
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

FranXico

Member
One thing I've got from all the PS5 stuff is that Sony is doing as much as they can to take all unnecessary workloads away from the GPU and CPU and dedicate that to fixed-function hardware, so they can make much more efficient use of the GPU and CPU. I don't think some people realise how big a deal this is.
It is a big deal, indeed. Both consoles are doing that to some extent.
 

LordOfChaos

Member
He also said dense, complex geometry usually results in less heat than simple geometry like a map screen, and specifically mentions certain map screens.

All of the above gives hints about what the PS5 is doing with clocks, along with the 2% drop and 10% power comments.

It's always a simple menu or a wall that sends the PS4 flying

The old paradigm optimizes for the worst 1% scenario: the maximum clock speed it can lock at in the worst cases is the same clock speed it locks at in the best cases. The PS5 clocks the GPU higher by looking at what the power load will be in most real use, and accepting that in the worst cases the clocks dip a bit lower.

It's still the weaker GPU, mind, no getting around that, but people acting like it's not an interesting idea, or that it was a response to being behind, aren't giving it enough merit, and they're underestimating the design time of such a thing. In the next refresh I could see both teams using a similar solution to go even higher; it might become standard console practice eventually.
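The two clocking paradigms described above can be sketched in a few lines. All numbers below are made up for illustration (the real power model is far more involved), and the cube-law power-vs-frequency scaling is only a rough textbook approximation:

```python
# Toy model of fixed-clock vs. power-budget-based clocking.
# Assumption (illustrative): power scales roughly with frequency cubed.

POWER_CAP_W = 180.0

# Hypothetical per-workload power draw (watts) at a 2.0 GHz reference clock.
workloads = {
    "dense geometry": 140.0,
    "simple map screen": 200.0,  # the pathological "menu melts the console" case
    "typical scene": 150.0,
}

def max_clock_ghz(power_at_ref: float, ref_ghz: float = 2.0) -> float:
    """Highest clock that keeps this workload under the power cap."""
    return ref_ghz * (POWER_CAP_W / power_at_ref) ** (1 / 3)

# Old paradigm: one fixed clock that survives the worst case.
fixed_clock = min(max_clock_ghz(p) for p in workloads.values())

# PS5-style: each workload runs at its own cap-limited clock,
# dipping only in the worst cases.
variable_clocks = {name: max_clock_ghz(p) for name, p in workloads.items()}

print(f"fixed clock for all scenes: {fixed_clock:.2f} GHz")
for name, clk in variable_clocks.items():
    print(f"{name}: {clk:.2f} GHz")
```

The point the toy model makes: the fixed clock is pinned by the map-screen case, while the variable scheme lets typical scenes run meaningfully faster.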
 

Great Hair

Banned
Found on Twitter... I don't know anything else.

GTA $300 million budget (team of 1,000+)
Halo 6 $500 million budget (team of 400+)
Qlymax: well-known, liked IP, $BIGGESTO BUDGETO (team of who? 9+), but a game ready for 2022 - 5 years (the usual time frame for a game) = 2017 (they started on their next game & engine while still employed at Ubisoft Montreal) - hyperbole much?

Kenneth Jackson
Screenwriter for the upcoming AC game (one of MANY)
He only worked at Ubisoft Montreal (2013 til 2020); before that he worked at IBM as a server administrator? Has done lots of network gameplay stuff, Xbox Live programming and multiplayer tuning... guessing that he worked on For Honor, Siege?

Created an entirely new water based physics / collision detection engine for the next AC game with exacting precision for true "Next-Gen" console gaming (the true power of this engine will not be realized until the Xbox Scorpio is released, and to a lesser extent on the PS4 Pro, with a PC version currently in the works).

==================​

According to Xbox insider Rand al Thor, who spoke in the latest Xbox Two podcast episode, Sony's blockbuster God of War is another PlayStation 4 exclusive that will be coming to PC this August. (A YouTuber/influencer = insider?)
 

Lethal01

Member
Well, he knows this to be the case, yes. The best PS5 graphics will not be as good as the best XsX graphics. It is what it is.

How would he know? I'm doubtful but genuinely asking, could you remind me who Matt is? I'm guessing he's at least working for Sony? Or a developer with confirmed access to the dev kits?
 

ToadMan

Member
Ok 🤷🏾‍♂️
Either way it's a hybrid. Not necessarily a bad thing, but not full RDNA2. Sony may not have needed other features, or simply added a few RDNA features later in development. I don't know, you don't know, but what we do know is that it's a hybrid.

Sony describe their PS5 GPU thusly: Custom RDNA 2.

MS describe their Xsex GPU thusly: Custom RDNA 2.


If you wish to call Sony's GPU a hybrid, then with the information that has been released, the same can be said of the Xsex.

If you wish to say the PS5 is not "full RDNA 2" (a meaningless statement, but you used it), then the Xsex is equally not "full RDNA 2" either.
 

Thirty7ven

Banned
That's like looking at a 15 TF GPU and saying no custom hardware will let a 10 TF card raytrace as well as the 15 TF one.

And then you put in custom raytracing hardware and the 10 TF one far surpasses the 15 TF one.
Features, tricks and optimizations don't just take you "so far"; they can absolutely let you far surpass hardware with far higher "numbers".

Now if he wants to claim he knows that there is nothing in the PS5 that's big enough to make a difference, fine, but to imply that no customizations could is beyond dumb.

All true. But the point is he is someone with access to both, he sees them in action every day, he knows the hardware. He isn’t speculating, we are.
 

MarkMe2525

Banned
Sorry if what I wrote was confusing; by no means is the clock speed in the XSX slow.

What I wanted to say is that it cannot be faster, because the heat produced at higher clock speeds under a fixed-clock strategy is too much for the current cooling mechanism.

Again, I just pointed at the cost because we are not simply comparing 52 CUs and 36 CUs; these CUs are different, and that difference is worth pointing out.
I can see the point now. I also agree because if Microsoft could handle the heat of running it faster they undoubtedly would.
The question on the XSX side is not the clock, but what % utilization they can get out of the 50+ CUs. Like Cerny stated, it's much easier to have high utilization of a narrower but faster GPU. More rays produced, if you're into raytracing; that should be a benefit of the XSX. Fewer calculations once the rays are bouncing across surfaces when compared to the PS5, due to slower clocks. I mean... it's hard to say how visible the difference will be, at least to me.
I can see the hypothetical situation but in practice has this been known to be an issue with other gpus with similar cu counts? I believe the 2070 runs a lower base clock, is utilization hampered on that card? I genuinely don't know as I'm not a PC gamer.
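For reference, the headline TF figures in this back-and-forth fall out of a one-line formula. The CU counts and clocks below are the publicly announced ones; this is peak FP32 throughput on paper, not delivered performance:

```python
# Paper math behind the "wide and slow vs. narrow and fast" comparison.
# FP32 TFLOPS = CUs * 64 shader lanes * 2 ops per clock (FMA) * clock (GHz) / 1000

def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

xsx = tflops(52, 1.825)  # Xbox Series X: 52 CUs at a fixed 1.825 GHz
ps5 = tflops(36, 2.23)   # PS5: 36 CUs at up to 2.23 GHz (variable)

print(f"XSX: {xsx:.2f} TF, PS5: {ps5:.2f} TF")  # XSX: 12.15 TF, PS5: 10.28 TF
print(f"paper gap: {(xsx / ps5 - 1) * 100:.0f}%")
```

How much of that ~18% paper gap survives contact with real utilization is exactly the argument in this thread.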
 

geordiemp

Member
All true. But the point is he is someone with access to both, he sees them in action every day, he knows the hardware. He isn’t speculating, we are.

Of course the PS5 cannot get from 10 to 12 TF, that's impossible; the floating point operations per second are defined.

Will the PS5 be 15% less performant, or is TF not everything that applies to performance - we will wait and see.

The system was ready to go but delayed by at least a year to add features to compete with the XsX.

What is your source besides Discord and timdog?

If you want to say "I believe" (but with zero evidence) then that's your belief, otherwise it's just plain FUD.
 

Thirty7ven

Banned
Of course the PS5 cannot get from 10 to 12 TF, that's impossible; the floating point operations per second are defined.

Will the PS5 be 15% less performant, or is TF not everything that applies to performance - we will wait and see.

I think it’s easy to accept that anything relating to shader performance will perform exactly as the numbers suggest.
 

splattered

Member
Of course the PS5 cannot get from 10 to 12 TF, that's impossible; the floating point operations per second are defined.

Will the PS5 be 15% less performant, or is TF not everything that applies to performance - we will wait and see.



What is your source besides Discord and timdog?

If you want to say "I believe" (but with zero evidence) then that's your belief, otherwise it's just plain FUD.

Ha, no, it's 100% FUD, I'm just being a smartass... I have no idea how things played out :)

I'm still excited for both consoles but going XsX first.
 

thelastword

Banned
Agree with all of this, apart from the PS4 having 4GB of GDDR5 being overkill. It was the opposite: people (industry professionals) were saying that 4GB would be insufficient and they'd be D.O.A. We actually saw what would have happened to the PS4, albeit to a much smaller extent, with the OG Xbox One, which had an insufficient RAM setup.
I'm talking about posters here, in the next-gen thread(s) prior to 2013... It has been linked many times before by onQ123... Lots of gems in those old next-gen threads...
 

jose4gg

Member
Of course it's not a 10-minute thing, it's a year-long delay, as we saw from the early GitHub leaks prior to the eventual prolonged reveal of the system.

The system was ready to go but delayed by at least a year to add features to compete with the XsX.

Delayed? Do you know when they planned to release the specs?

These systems require years to build; building hardware is genuinely difficult because hardware issues cannot be corrected with patches the way software can. They would have needed months to add the "features" they needed and test that they worked.

You are throwing theories around again without any info to back up your claim.

#Fud
 

geordiemp

Member
I think it’s easy to accept that anything relating to shader performance will perform exactly as the numbers suggest.

Yes, we expect the shading performance to be that.

And there is more to performance over a frame than just shading.

Cerny already listed the GPU speed benefits: rasterisation, cache and internal registers. That's why game benchmarks don't use TF.

Will XSX render a few more reflections in puddles? Probably. Will PS5 load and stream better? Probably. What else?

Will anyone notice? Probably not, although I am sure DF will live on the quality of puddle reflections for the next 5 years.
 

splattered

Member
Delayed? Do you know when they planned to release the specs?

These systems require years to build; building hardware is genuinely difficult because hardware issues cannot be corrected with patches the way software can. They would have needed months to add the "features" they needed and test that they worked.

You are throwing theories around again without any info to back up your claim.

#Fud

Yup
 

geordiemp

Member
The current discussion is a bad one. There are too many people that do not have truth as their interest - they are pushing details to push agendas instead of showing real curiosity about what the two platforms look like.

Before we even go further, I would implore the participants in the discussion to agree on what they mean by RDNA2. I doubt the majority of the participants have a good idea about that.

For me RDNA2 means the upgraded TSMC fab with slight modifications to the core design of the CUs that allow a significant IPC gain, frequency increase and power-performance gain over RDNA1. All other things are optional features that you can pick and choose from depending on your design vision for the console.

Yup, TSMC will never say how they do it; my guess is they have likely used EUV for some critical parts of the litho for the FinFET gates, and they might have some nice improved high-k material for the gate or another fancy ALD process.

So if we think of RDNA2 as a node, both consoles will have had all the logic, every transistor, scaled and drawn to this process. So they are all RDNA2 process, every transistor.

After that, you can argue about what are RDNA2 LOGIC blocks; both will have chosen from a custom menu of logic and functions that were important, and dropped ones that didn't fit their vision at the time, but both would know exactly what was coming and when, as it's a slow process.

Maybe Cerny saw the direction of UE5 - small triangles and no LODs, a triangle per pixel - and did not care for VRS or anything that enhances LOD handling, as it would not apply to Nanite's direction of small, per-pixel triangles. All we know is Cerny went crazy with I/O... latency and cache scrubbers, coherency, and removing DMA bottlenecks. Cerny also mentions the speed of displaying small triangles a few times in Road to PS5... hmmm.

It's almost as if Cerny was designing hardware for Nanite (just my humorous thought, I do not know, just fun speculation)... We don't know.
 
I can see the point now. I also agree because if Microsoft could handle the heat of running it faster they undoubtedly would.

I can see the hypothetical situation but in practice has this been known to be an issue with other gpus with similar cu counts? I believe the 2070 runs a lower base clock, is utilization hampered on that card? I genuinely don't know as I'm not a PC gamer.

I didn't buy an RTX, still rocking a 1080 in both my laptop and desktop, so all I'm about to say is based on known info I can google. The 2070 has a slightly lower clock speed when compared to the 2080, in an apples-to-apples comparison (not mixing OC versions).
So we're looking at the same architecture, a slight clock advantage for the 2080, and also an increased number of CUDA cores, tensor cores, etc. All in all we're looking at an average 5-10 fps advantage (depending on the game) for the 2080 at 1440p.

Now, when we increase the clock speed on a 2070 (overclock it) above the 2080, what does the performance look like?

benchmarks

Pasting their conclusion:

"When we overclocked the ASUS ROG STRIX RTX 2070 SUPER O8G GAMING is where the real fun began. At 2.1GHz GPU and 15.9GHz memory the ASUS ROG STRIX RTX 2070 SUPER O8G GAMING outperformed the GeForce RTX 2080 FE from 3-6% consistently in every game. It did outperform the RTX 2080 FE, absolutely confirmed! "



As you can see, increasing the clock speed actually gets a GPU with fewer CUDA cores, RT cores and tensor cores to edge out the 2080.

BTW, the 2080 used in the comparison is boost clocked at 1.8 GHz, so actually higher than the non-FE 2080s.

Clock speeds matter :)
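Worth spelling out the raw math behind that benchmark result: at 2.1 GHz the overclocked 2070 Super actually edges past the 2080 FE in theoretical throughput too, so it isn't purely a clocks-beat-cores story (core counts and clocks taken from the comparison above):

```python
# Same paper-TF formula applied to the NVIDIA cards in the benchmark above.
# FP32 TFLOPS = CUDA cores * 2 ops per clock (FMA) * clock (GHz) / 1000

def tflops(cores: int, clock_ghz: float) -> float:
    return cores * 2 * clock_ghz / 1000

rtx_2080_fe  = tflops(2944, 1.8)  # 2080 FE at its ~1.8 GHz boost
rtx_2070s_oc = tflops(2560, 2.1)  # 2070 Super overclocked to 2.1 GHz

print(f"2080 FE:       {rtx_2080_fe:.2f} TF")   # 10.60 TF
print(f"2070 Super OC: {rtx_2070s_oc:.2f} TF")  # 10.75 TF
```

So the OC'd narrower card slightly leads in paper TF as well, which makes it hard to attribute its benchmark win to clock speed alone.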
 

MarkMe2525

Banned
I didn't buy an RTX, still rocking a 1080 in both my laptop and desktop, so all I'm about to say is based on known info I can google. The 2070 has a slightly lower clock speed when compared to the 2080, in an apples-to-apples comparison (not mixing OC versions).
So we're looking at the same architecture, a slight clock advantage for the 2080, and also an increased number of CUDA cores, tensor cores, etc. All in all we're looking at an average 5-10 fps advantage (depending on the game) for the 2080 at 1440p.

Now, when we increase the clock speed on a 2070 (overclock it) above the 2080, what does the performance look like?

benchmarks

Pasting their conclusion:

"When we overclocked the ASUS ROG STRIX RTX 2070 SUPER O8G GAMING is where the real fun began. At 2.1GHz GPU and 15.9GHz memory the ASUS ROG STRIX RTX 2070 SUPER O8G GAMING outperformed the GeForce RTX 2080 FE from 3-6% consistently in every game. It did outperform the RTX 2080 FE, absolutely confirmed! "


As you can see, increasing the clock speed actually gets a GPU with fewer CUDA cores, RT cores and tensor cores to edge out the 2080.

BTW, the 2080 used in the comparison is boost clocked at 1.8 GHz, so actually higher than the non-FE 2080s.

Clock speeds matter :)
That's what I was looking for. It seems the 2070 Super is using the same die as the 2080 but still has 384 fewer CUDA cores (2560 vs 2944). I really don't know how CUDA cores compare to CUs to extrapolate further into the comparison.
 
That's what I was looking for. It seems the 2070 Super is using the same die as the 2080 but still has 384 fewer CUDA cores (2560 vs 2944). I really don't know how CUDA cores compare to CUs to extrapolate further into the comparison.

The clocks definitely matter, though in that example the 2070 is boosted above the 2080 in TF performance too, which makes the read a little harder. Probably an experiment that would work best if the gap in cores was larger.
 

Dolomite

Member
Mesh shaders and primitive shaders are the same thing. AMD call them primitive shaders, and AMD will still call them primitive shaders on RDNA2 because that's the term they use. Sony have obviously done some customisation in that area too, as they call it the Geometry Engine. Different companies use different terminology.

PS5 also uses some form of ML, considering EA has already mentioned it. It might not be RDNA-derived and GPU-based. Sony Interactive Entertainment (the PlayStation division) have patented a lot of machine learning stuff.
Prim shaders and mesh shaders are not the same, brother. Period. They execute similar tasks, and one is the streamlined successor to the other, but they are not the same. Mesh achieves what prim does in fewer steps, arguably much better:

Prim shaders add a stage in between the Hull Shader and fixed function tessellation in the legacy pipeline

Mesh shaders scrap almost the entire pipeline before rasterization and pixel shaders for MUCH faster, more parallel and more programmable/flexible geometry processing
Mesh shading basically lets you use a compute shader (basically a type of GPU program that lets you use the GPU to accelerate general-purpose math operations) to determine vertex inputs in parallel, rather than the old sequential Input Assembler
It also enables fully programmable tessellation
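For anyone trying to visualize what "scrap almost the entire pipeline" means, here is the stage contrast laid out explicitly (a rough sketch; stage names follow the D3D12 pipeline model the post is describing, and the amplification shader is optional):

```python
# The geometry stages mesh shading replaces, vs. the legacy pipeline.
legacy_geometry_pipeline = [
    "Input Assembler",   # sequential vertex fetch
    "Vertex Shader",
    "Hull Shader",
    "Tessellator",       # fixed-function tessellation
    "Domain Shader",
    "Geometry Shader",
    "Rasterizer",
    "Pixel Shader",
]

mesh_pipeline = [
    "Amplification Shader",  # optional; decides how many mesh shader groups launch
    "Mesh Shader",           # compute-style; replaces everything before rasterization
    "Rasterizer",
    "Pixel Shader",
]

replaced = [s for s in legacy_geometry_pipeline if s not in mesh_pipeline]
print("stages replaced by mesh shading:", replaced)
```

Six fixed stages collapse into one or two programmable ones, which is where the parallelism and flexibility claims come from.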
 

SleepDoctor

Banned
  1. An error message that appears if you try to add more than one PS5 console to your cart:
    "You can only purchase one version of the PS5™ Console: Disc or Digital. You have already added one PS5™ console to your cart."

Please be true. Please be true. Please be true. Please be true.


That would be great. Scalpers get fucked 🖕
 
  1. An error message that appears if you try to add more than one PS5 console to your cart:
    "You can only purchase one version of the PS5™ Console: Disc or Digital. You have already added one PS5™ console to your cart."

Please be true. Please be true. Please be true. Please be true.

And with Sony upping production, this should ensure that many people will get the system at no more than MSRP.
 

Shpeshal Nick

aka Collingwood
So, Lisa Su and Cerny are not liars, but laymen patronizers?

Now I've just seen everything here.

So you tell me then, genius.

Let’s assume hypothetically it’s true (I never actually said it was). How do you think they would/should label it in official documentation/marketing materials?

RDNA1?

RDNA 1.5?

RDNA1 with RDNA2 features?
 

geordiemp

Member
Prim shaders and mesh shaders are not the same, brother. Period. They execute similar tasks, and one is the streamlined successor to the other, but they are not the same. Mesh achieves what prim does in fewer steps, arguably much better:

Prim shaders add a stage in between the Hull Shader and fixed function tessellation in the legacy pipeline

Mesh shaders scrap almost the entire pipeline before rasterization and pixel shaders for MUCH faster, more parallel and more programmable/flexible geometry processing
Mesh shading basically lets you use a compute shader (basically a type of GPU program that lets you use the GPU for acceleration of general purpose math operations) to determine vertex inputs in parallel rather than the old sequential Input Assembler
It also enables fully programmable tessellation

The PS5's new Geometry Engine can also scrap (cull) geometry before rasterisation, as stated in Road to PS5.

Go listen to Road to PS5. Sony need to give new names to things, lol.

Also Matt Hargatt



Does that read like old or new? Answers on a postcard.
 
So what happened with this Sony engineer Rosario Leonardi??

I see people wanting to report something to Mod Of War.

What happened here? Genuinely curious.

They are keeping the thread locked until there's proof that everything that was said was real. The issue that they are having is that apparently some of the tweets were deleted but there's no proof that they have been deleted. That's what they are waiting on.
 

Doncabesa

Member
They are keeping the thread locked until there's proof that everything that was said was real. The issue that they are having is that apparently some of the tweets were deleted but there's no proof that they have been deleted. That's what they are waiting on.
The Internet Wayback Machine shows this post was real and was deleted.

 

Brudda26

Member
Prim shaders and mesh shaders are not the same, brother. Period. They execute similar tasks, and one is the streamlined successor to the other, but they are not the same. Mesh achieves what prim does in fewer steps, arguably much better:

Prim shaders add a stage in between the Hull Shader and fixed function tessellation in the legacy pipeline

Mesh shaders scrap almost the entire pipeline before rasterization and pixel shaders for MUCH faster, more parallel and more programmable/flexible geometry processing
Mesh shading basically lets you use a compute shader (basically a type of GPU program that lets you use the GPU for acceleration of general purpose math operations) to determine vertex inputs in parallel rather than the old sequential Input Assembler
It also enables fully programmable tessellation

Yeah, it was more of a generalization; they both go about producing the same results in a "different" way.
I probably should have been clearer when it came to prim shaders on PS5 in general. The Geometry Engine on PS5 does what you put above for mesh shaders, as someone above pointed out. So Sony have obviously made some changes in that department.
 

geordiemp

Member
Yeah, it was more of a generalization; they both go about producing the same results in a "different" way.
I probably should have been clearer when it came to prim shaders on PS5 in general. The Geometry Engine on PS5 does what you put above for mesh shaders, as someone above pointed out. So Sony have obviously made some changes in that department.

Sony forgot to change the name....
 

Lethal01

Member
All true. But the point is he is someone with access to both, he sees them in action every day, he knows the hardware. He isn’t speculating, we are.

Who is he?

Either way, fair enough, but I don't think a single developer's word is enough to shut down speculation when evidence points otherwise, especially when there are other devs saying otherwise.

It's a different story if he's a lead engine developer at Activision or something.
 
Is it just me or does 90% of this thread consist of people:
A) Saying the XSX is more powerful than the PS5 (which is, y'know, a fact)
B) Saying the power difference doesn't matter, or trying to say the difference isn't that big
C) Saying the PS5's SSD is faster than the XSX's SSD (which is, again, a fact)
D) Saying that the SSD speeds are not that important
 

Thirty7ven

Banned
Who is he?

Either way, fair enough, but I don't think a single developer's word is enough to shut down speculation when evidence points otherwise, especially when there are other devs saying otherwise.

It's a different story if he's a lead engine developer at Activision or something.

Effectively the only insider who has gotten basically everything right. He works at a third-party development studio, and considering he had early access to both devkits, probably one of the major ones.

You don’t have other developers saying something different. What we have are indie nobodies with no access to devkits talking gibberish.
 
Looks like some preparation stuff for pre-orders opening soon. I still don't expect them to drop it out of the blue like folks thought a while back.

There's a possibility they could react to Microsoft fairly quickly, but I'm guessing they'll go the first week of August.
 

sircaw

Banned
Is it just me or does 90% of this thread consist of people:
A) Saying the XSX is more powerful than the PS5 (which is, y'know, a fact)
B) Saying the power difference doesn't matter, or trying to say the difference isn't that big
C) Saying the PS5's SSD is faster than the XSX's SSD (which is, again, a fact)
D) Saying that the SSD speeds are not that important

There are some good genuine tech people trying to have an honest conversation in this thread.

I am not one of these. :messenger_grinning:
 