
Rumor: PS5 Pro Codenamed "Trinity" targeting Late 2024 (some alleged specs leaked)

Would you upgrade from your current PS5?

  • For sure

    Votes: 377 41.0%
  • Probably

    Votes: 131 14.2%
  • Maybe

    Votes: 127 13.8%
  • Unlikely

    Votes: 140 15.2%
  • Not a chance

    Votes: 145 15.8%

  • Total voters
    920

Xyphie

Member
Will this thing have a big L3 cache for the CPU? 3D V-Cache? Infinity Cache for the GPU?

Don't expect more than 16MB of L3 total, so V-Cache definitely won't happen. L3 cache for the GPU is a maybe/probably; it seems most likely they'd just use the 4x MCD configuration from Navi 32, which this GPU will lift basically everything from.
 
Last edited:

Tqaulity

Member

These are close to the core rumors/leaks but seem a bit optimistic. 20 GB of RAM and 24 TFLOPs? Bring it on! This would be awesome and on the upper end of what Sony could do for a reasonable price by next year.

Only? That is higher than I expect for 54 CUs (it won't be 60 usable, for obvious reasons).
Ok yes, it's silly to speculate over speculation, I get it. But IF these specs turned out to be true, let's think about how you could get to 24 TFLOPs with only a 54 CU part. The rumored 7700 XT with 54 CUs clocked at 2.6GHz boost is only ~18 TFLOPs. That's already a pretty hefty clock increase over the base PS5. How do they get another 33% TFLOP increase with that config? Possibly an "astronomical" clock speed (i.e. >3GHz)? Maybe the bits from RDNA 4 provide some IPC gains not seen with RDNA 3? Or maybe they use a full 60 CU part (even then, you'd still need a ~3GHz clock to hit 24 TFLOPs). Makes you wonder, doesn't it :pie_thinking:

At the end of the day, I would love 24 TFLOPs however they get there. Make it happen, Sony!
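If you want to sanity-check that clock math yourself, peak FP32 for an RDNA GPU is just CUs × 64 shaders × 2 FLOPs per clock × clock speed. A minimal sketch (standard RDNA single-issue math, nothing Pro-specific assumed):

```python
def rdna_tflops(cus: int, clock_ghz: float, shaders_per_cu: int = 64) -> float:
    """Peak FP32 TFLOPs: CUs x shaders x 2 FLOPs/clock (FMA) x clock in GHz."""
    return cus * shaders_per_cu * 2 * clock_ghz / 1000

print(rdna_tflops(54, 2.6))    # ~17.97 TF -> the rumored 7700 XT figure
print(rdna_tflops(54, 3.47))   # ~23.98 TF -> the clock a 54 CU part needs for 24 TF
print(rdna_tflops(60, 3.125))  # 24.0 TF   -> even a full 60 CU part needs ~3.1 GHz
```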
 

Mr.Phoenix

Member
These are close to the core rumors/leaks but seem a bit optimistic. 20 GB of RAM and 24 TFLOPs? Bring it on! This would be awesome and on the upper end of what Sony could do for a reasonable price by next year.


Ok yes, it's silly to speculate over speculation, I get it. But IF these specs turned out to be true, let's think about how you could get to 24 TFLOPs with only a 54 CU part. The rumored 7700 XT with 54 CUs clocked at 2.6GHz boost is only ~18 TFLOPs. That's already a pretty hefty clock increase over the base PS5. How do they get another 33% TFLOP increase with that config? Possibly an "astronomical" clock speed (i.e. >3GHz)? Maybe the bits from RDNA 4 provide some IPC gains not seen with RDNA 3? Or maybe they use a full 60 CU part (even then, you'd still need a ~3GHz clock to hit 24 TFLOPs). Makes you wonder, doesn't it :pie_thinking:

At the end of the day, I would love 24 TFLOPs however they get there. Make it happen, Sony!
Don't read into that guy... that shit reeks of BS to me. Seems like he is just trying to jump on the leaks bandwagon for attention.

60 CUs means 54 CUs active, realistically speaking. Getting 54 CUs to 24TF? That's a 3.5GHz clock. Nope... BS. And even if it was somehow the full 60 CUs being used (which makes no sense, but hey, we're speculating, right?), that's still a 3.1GHz clock. Nope... BS again.

And that RAM. That shit makes zero sense. And no one EVER talks about how much RAM is being reserved for the OS. And why would the OS RAM suddenly increase? Is he saying that the 4GB is a separate pool of RAM? Oh, then how does it have 448GB/s of bandwidth? All these things are just not possible.
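For what it's worth, that 448GB/s figure only lines up with the base PS5's existing setup: it's exactly what a 256-bit bus of 14Gbps GDDR6 gives you, which is why a separate slow 4GB pool quoting the same number doesn't add up. A quick check, assuming the standard bus-width × pin-speed formula:

```python
def gddr_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width / 8 bits per byte) x per-pin rate."""
    return bus_width_bits / 8 * gbps_per_pin

print(gddr_bandwidth_gbs(256, 14))  # 448.0 -> base PS5: 256-bit bus @ 14 Gbps GDDR6
print(gddr_bandwidth_gbs(256, 18))  # 576.0 -> the same bus with 18 Gbps chips
```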
 

HeisenbergFX4

Gold Member
24 TF at 60 CUs???
No Way Thinking GIF
 

StereoVsn

Gold Member
Would be interesting. I know myself and my mates are 46-47, and my wife joined the party in the last few years at 45, so although we are not the majority, I do know that a lot will be between 35 and 50 and will have a fair amount of disposable income to throw at a Pro. I know myself and a good friend will pre-order day 1, and I imagine there are a lot like us.
Oh damn, hello fellow 40s crew! And yeah, if this thing is not going to go ballistic with the disk spinning up every 20 min, I am pre-ordering.

Although to be fair, my PS4 Pro sounded like a jet taking off. Couldn't sell it fast enough to get a PS5, lol.
 

StereoVsn

Gold Member
I keep reading this, but it seems a tad unrealistic. If there's one relatively unimportant component where they could cut costs to hit their target, this is it. I also imagine that those upgrading to a Pro will most likely be the "hardcore" users, who are even more likely to have expanded their storage with a third-party SSD they will just carry over. Just seems like a big waste of money that could have gone to another component or been used to reduce the retail price.
Yeah, considering you can get a good PCIe Gen 4 4TB drive for like $250-270, there's no point in going over 1TB built in. Save on costs toward other components.
 

ripjoel

Member
Samsung now has better 4nm yields. What's the latest report on TSMC/Samsung 4nm pricing?
 
Last edited:

Bry0

Member
Refreshing as well; there's around a 20% increase in power efficiency over N5. I think the Pro will be around 350-400W total system power consumption.
That would be either a big or a loud console. I would expect active power use in gaming to be like 250W, maybe 300W absolute max.
 

Mr.Phoenix

Member
Agreed, I think average power consumption will be in that ballpark. The PS5 has a total power draw of 350W, but in operation it's around 200W and fluctuates.
Yup... it's no mystery, actually. We already have detailed specs of the GPU this is going to be based on.

That GPU is already rated to be a 200W GPU. On 5nm. And if it's on 4nm? Then it would be drawing even less power than that. It has a game and boost clock of 2400MHz and 2600MHz respectively; the PS5pro can sit in the middle, so... 2500MHz? And it will obviously not have the massive 64MB Infinity Cache that GPU has. Throw in the CPU, and I can see the PS5pro APU pulling no more than 230W under load. The system would still need a 300W+ PSU though.
 

Dream-Knife

Banned
Yup... it's no mystery, actually. We already have detailed specs of the GPU this is going to be based on.

That GPU is already rated to be a 200W GPU. On 5nm. And if it's on 4nm? Then it would be drawing even less power than that. It has a game and boost clock of 2400MHz and 2600MHz respectively; the PS5pro can sit in the middle, so... 2500MHz? And it will obviously not have the massive 64MB Infinity Cache that GPU has. Throw in the CPU, and I can see the PS5pro APU pulling no more than 230W under load. The system would still need a 300W+ PSU though.
Can you really compare an APU to a dGPU?

PS5 has the same "hardware" as the 6700, but has the performance of a 6600 XT due to lower clocks.
 

Mr.Phoenix

Member
Can you really compare an APU to a dGPU?

PS5 has the same "hardware" as the 6700, but has the performance of a 6600 XT due to lower clocks.
Yes, we can compare it. It's not going to be identical... but that is the best GPU to use to compare or extrapolate what the PS5pro could be. Especially if you know what the architectural carryovers are. And if we end up being off on just clocks... that's still pretty damn close.

                      7700 XT            PS5pro
Fab Node              5nm                4nm/5nm
Compute Units         54                 54
Clock (game/boost)    2400MHz/2600MHz    2500MHz
FP32 (RDNA3 x2)       17.9TF (35.9TF)    17.2TF (34.5TF)
Cache (L2/L3)         2MB/64MB           6MB/-
TDP                   200W               200W+ (CPU + GPU)
Mem Bus               192-bit            256-bit
 

ChiefDada

Member
Returnal runs at high settings, 1440p/60, on a 6600 XT. On PS5 it runs at medium, 1080p/60.

Returnal stands alone as the weirdest PS5-to-PC port I've ever seen. At 1440p, the bandwidth-starved 6600 XT with an 8GB buffer should not be running faster than PS5. Simply put, the game is not well optimized for PS5. Housemarque should have patched it. It especially stings because it's my favorite game on PS5.
 
I just laugh that anyone would question Mark Cerny at this point. PS4, PS4 Pro, and PS5 have all punched above their weight.

Look at the gaming experiences the PS4/PS4 Pro offered even with the crap tier CPU. Mark Cerny clearly knows how to design hardware and does a great job engineering the architecture to limit certain bottlenecks.

Go build a PC with a similar setup of the PS4/PS4 Pro and see how it handles the same games.

PS5 Pro will take a lot of those games with unlocked 40fps modes above the 48fps VRR threshold, making for a much better experience.
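The VRR arithmetic here is simple enough to check: a game locked at 40fps needs only a modest GPU uplift to clear the 48fps floor, assuming roughly linear GPU-bound scaling:

```python
# Rough check: uplift needed for a locked-40fps mode to clear the 48fps VRR floor,
# assuming frame rate scales roughly linearly with GPU performance (GPU-bound case).
base_fps, vrr_floor = 40, 48
print(f"needed uplift: {vrr_floor / base_fps - 1:.0%}")  # needed uplift: 20%
```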
 
Last edited:

Brigandier

Member
First off... 24TF is not "only"...

Second, that tweet doesn't make sense to me. I can't think of a RAM config that gets you those numbers.

Honestly, if it is to be 24TF, I'd be extremely happy lol...

I'm a bit out of the loop with GPU/CPU tech these days, but is the latest RDNA potentially being used in the PS5Pro? How does that double its TF output? I know I could Google it, but what would be the point of discussion!!!

You think Pro will use a newer CPU???
 
Yes, we can compare it. It's not going to be identical... but that is the best GPU to use to compare or extrapolate what the PS5pro could be. Especially if you know what the architectural carryovers are. And if we end up being off on just clocks... that's still pretty damn close.

                      7700 XT            PS5pro
Fab Node              5nm                4nm/5nm
Compute Units         54                 54
Clock (game/boost)    2400MHz/2600MHz    2500MHz
FP32 (RDNA3 x2)       17.9TF (35.9TF)    17.2TF (34.5TF)
Cache (L2/L3)         2MB/64MB           6MB/-
TDP                   200W               200W+ (CPU + GPU)
Mem Bus               192-bit            256-bit

17.2 is pretty close to my expectation of 18 TFLOPs, so I guess I'd only be mildly disappointed if they couldn't reach 18.
 

Mr.Phoenix

Member
Honestly if it is to be 24tf I'd be extremely happy lol...

I'm a bit out the loop with GPU CPU tech these days but is the latest RDNA potentially being used in PS5Pro? How does that double it's TF output? I know I could Google it but what would be the point of discussion!!!
Yes. That is about the only thing to be certain of. At worst, the PS5pro will be using RDNA3, which is already out now, so a safer bet would be to say it will use RDNA3+. Basically something in between 3 and 4.

Since you say you are out of the loop, I will give a simple explanation of the TF doubling in RDNA3. In RDNA3, each compute unit can process two instructions instead of one, but only if the GPU driver can sort out which instructions best fit into this get-two-per-clock thing. That is the most basic way I can explain it.

A slightly more detailed (and complex) way: each CU has 64 shaders, aka ALUs. The SIMD units that cater to those 64 shaders have the ability to handle two instructions simultaneously. The problem is that it's still built on the OG CU foundation, so you have hardware that can handle two instructions per clock with the underpinnings of something designed to handle one per clock. This means that at best, in real-world performance, you will never get 2x the TF number with RDNA3. Maybe you get like 60-70% more, but not 100% more. Right now, however, you would be hard-pressed to even see like 20-30% improvements, because the ability to dual issue is currently very driver dependent. The compiler basically has to be good at finding instances that favor dual-issue utilization. And AMD is not known for its great drivers.

But it's something that would likely increase over time, or even better, something that would be easier to implement in a console.
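To put rough numbers on that caveat, here's a toy model (my own illustration, not an AMD figure): paper TFLOPs double, but effective throughput only scales with how often the compiler actually finds a second instruction to pair up.

```python
def effective_tflops(single_issue_tf: float, pair_rate: float) -> float:
    """Toy model: a fraction pair_rate of issue slots co-issue a second
    instruction, so effective throughput = base x (1 + pair_rate)."""
    return single_issue_tf * (1 + pair_rate)

base = 17.2  # hypothetical single-issue TF for a 54 CU part at 2.5 GHz
print(effective_tflops(base, 0.25))  # ~21.5 TF at the ~20-30% seen on PC today
print(effective_tflops(base, 0.65))  # ~28.4 TF at the optimistic 60-70% ceiling
print(effective_tflops(base, 1.00))  # 34.4 TF paper number, never hit in practice
```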
You think Pro will use a newer CPU???
Not sure about this one. I am sure we all would love it to use Zen 4 and all that. But in truth, I personally see them doing no more than increasing the CPU clock and the CPU cache. Its lack of cache is the reason it's underperforming. If they were to throw in Zen 4 cores with the current cache structure, it would be just as bad as it is now.
 

Brigandier

Member
Yes. That is about the only thing to be certain of. At worst, the PS5pro will be using RDNA3, which is already out now, so a safer bet would be to say it will use RDNA3+. Basically something in between 3 and 4.

Since you say you are out of the loop, I will give a simple explanation of the TF doubling in RDNA3. In RDNA3, each compute unit can process two instructions instead of one, but only if the GPU driver can sort out which instructions best fit into this get-two-per-clock thing. That is the most basic way I can explain it.

A slightly more detailed (and complex) way: each CU has 64 shaders, aka ALUs. The SIMD units that cater to those 64 shaders have the ability to handle two instructions simultaneously. The problem is that it's still built on the OG CU foundation, so you have hardware that can handle two instructions per clock with the underpinnings of something designed to handle one per clock. This means that at best, in real-world performance, you will never get 2x the TF number with RDNA3. Maybe you get like 60-70% more, but not 100% more. Right now, however, you would be hard-pressed to even see like 20-30% improvements, because the ability to dual issue is currently very driver dependent. The compiler basically has to be good at finding instances that favor dual-issue utilization. And AMD is not known for its great drivers.

But it's something that would likely increase over time, or even better, something that would be easier to implement in a console.

Not sure about this one. I am sure we all would love it to use Zen 4 and all that. But in truth, I personally see them doing no more than increasing the CPU clock and the CPU cache. Its lack of cache is the reason it's underperforming. If they were to throw in Zen 4 cores with the current cache structure, it would be just as bad as it is now.

Thanks for that, I understood you clearly 👍. Sounds like it has huge potential if Cerny and the other engineers nail it, which I think they will.

Do you think the Zen 2 CPU with a bigger cache / higher clock speed will be competitive even in late 2024?

Last but not least, what is Cerny's secret sauce with the RT patent? Should we be expecting huge improvements to RT capability, or should we not get carried away?

Sounds like the PS5 Pro has big potential to be a worthy mid-gen refresh!!!
 

SABRE220

Member
I'd just like to point out that your comment about Sony first parties having little ambition to push tech advancement is a little bit disingenuous, given that Sony has released only a small handful of new "built for PS5" games to date. We really don't know how ambitious they are or what those games will look like. All we've seen so far are Demon's Souls (arguably the best-looking next-gen game to date), Ratchet & Clank (arguably the best-looking next-gen game to date, along with the best utilization of fast I/O to date), Astro Bot (best DualSense utilization yet), and Returnal (technically not first party when it was developed, but still one of the best uses of DualSense, Tempest 3D audio, and the SSD in a game to date). Remember, both Spider-Man games were PS4 ports, Horizon FW was cross-gen, God of War R was cross-gen, GT7 was cross-gen, Naughty Dog has only released remakes and remasters so far, Sucker Punch has only done an update to GoT, and MLB of course is still cross-gen.

I mean, if you're judging upcoming games based on their trailers, I wouldn't, as many of the games from the PS Showcase this year are still far out. But in terms of what has actually released, all four are some of the most ambitious "next-gen" games to date.
I think it should have been pretty self-explanatory that I was judging Sony first parties' efforts so far this gen; in the future they might drop some gems, but looking at Spider-Man 2, yeah, let's keep our expectations grounded. Compared to prior gens, their output has been underwhelming to say the least. We are halfway into the gen and so far there has been nada in terms of next-gen showcases, really. Everyone is more than happy to stay safe and stagnant in the cross-gen bubble, slightly upgrading their last-gen pipelines and calling it a day. Ratchet is nice, and great in terms of I/O usage, but in terms of visuals it's a good starting point, not like the technical tours de force we had come to expect at the halfway points of prior gens.

We had Killzone 2, Uncharted 2, GoW3 etc., and so many jaw-droppers on PS3 by that point. And even on the PS4, which was a relatively conservative upgrade, we had an easily distinguishable next-gen output within a few years of the gen. Meanwhile, look at the shit we've had so far on the PS5.
 
Last edited:
Yes. That is about the only thing to be certain of. At worst, the PS5pro will be using RDNA3, which is already out now, so a safer bet would be to say it will use RDNA3+. Basically something in between 3 and 4.

Since you say you are out of the loop, I will give a simple explanation of the TF doubling in RDNA3. In RDNA3, each compute unit can process two instructions instead of one, but only if the GPU driver can sort out which instructions best fit into this get-two-per-clock thing. That is the most basic way I can explain it.

A slightly more detailed (and complex) way: each CU has 64 shaders, aka ALUs. The SIMD units that cater to those 64 shaders have the ability to handle two instructions simultaneously. The problem is that it's still built on the OG CU foundation, so you have hardware that can handle two instructions per clock with the underpinnings of something designed to handle one per clock. This means that at best, in real-world performance, you will never get 2x the TF number with RDNA3. Maybe you get like 60-70% more, but not 100% more. Right now, however, you would be hard-pressed to even see like 20-30% improvements, because the ability to dual issue is currently very driver dependent. The compiler basically has to be good at finding instances that favor dual-issue utilization. And AMD is not known for its great drivers.

But it's something that would likely increase over time, or even better, something that would be easier to implement in a console.

Not sure about this one. I am sure we all would love it to use Zen 4 and all that. But in truth, I personally see them doing no more than increasing the CPU clock and the CPU cache. Its lack of cache is the reason it's underperforming. If they were to throw in Zen 4 cores with the current cache structure, it would be just as bad as it is now.

You're saying the dual-issue compute system is driver dependent? This would mean it requires no input from developers/programmers.

I'd be curious to see what kind of implications this would have on console, as the hardware is fixed as well as the drivers.
 

Mr.Phoenix

Member
Thanks for that, I understood you clearly 👍. Sounds like it has huge potential if Cerny and the other engineers nail it, which I think they will.

Do you think the Zen 2 CPU with a bigger cache / higher clock speed will be competitive even in late 2024?
Yup. It would be more than adequate. Especially when you consider that all these consoles are expected to do is peak at 120fps when possible.

To put it into perspective, the PC equivalent of the PS5 CPU is the Ryzen 7 3700X. That CPU, paired with a 2080 Ti, can run Fortnite at 1080p at over 210fps. Metro [email protected]. And the only things different between that CPU and the one in the PS5 are that the PS5's is clocked lower, and has only, I think, 8MB of cache vs the 32MB in the PC version of the chip.
Last but not least, what is Cerny's secret sauce with the RT patent? Should we be expecting huge improvements to RT capability, or should we not get carried away?

Sounds like the PS5 Pro has big potential to be a worthy mid-gen refresh!!!
No, you can get carried away. It, however, is not that groundbreaking, which is a good thing, because it means it's more realistic.

Cerny's patent would basically make an AMD GPU handle RT in the exact same way Nvidia and Intel GPUs handle RT. Currently, AMD GPUs use shaders (CUs) to handle BVH traversal, whereas Nvidia and Intel GPUs have BVH traversal handled by the RT cores too. That is why AMD GPUs are crap at RT. This patent basically catches AMD up. Want some context for how excited you should be?

Take Hogwarts. An extreme case, but it shows how messed up it is.

No RT, 1440p:
7900 XTX = 112fps
4080 = 109fps
3070 = 54fps
Intel A770 = 42fps


RT Ultra, 1440p:
4080 = 52fps
3070 = 24fps
Intel A770 = 20.5fps
7900 XTX = 15fps

Yup, this is why we need BVH acceleration. RT at Ultra cuts the performance of every other GPU by nearly exactly half... but it completely decimates the AMD GPU, dropping performance by over 80%.
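Running the numbers makes the asymmetry obvious; a quick script over the Hogwarts fps figures quoted above (just those values, nothing else assumed):

```python
# Performance lost when RT Ultra is enabled, using the Hogwarts fps figures above.
no_rt    = {"7900 XTX": 112, "4080": 109, "3070": 54, "A770": 42}
rt_ultra = {"7900 XTX": 15,  "4080": 52,  "3070": 24, "A770": 20.5}

for gpu in no_rt:
    drop = 1 - rt_ultra[gpu] / no_rt[gpu]
    print(f"{gpu}: {no_rt[gpu]} -> {rt_ultra[gpu]} fps ({drop:.0%} drop)")

# 7900 XTX: 112 -> 15 fps (87% drop)  <- the >80% hit called out above
# 4080: 109 -> 52 fps (52% drop)
# 3070: 54 -> 24 fps (56% drop)
# A770: 42 -> 20.5 fps (51% drop)
```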

You're saying the dual-issue compute system is driver dependent? This would mean it requires no input from developers/programmers.

I'd be curious to see what kind of implications this would have on console, as the hardware is fixed as well as the drivers.
Yes. It's driver dependent. Devs could obviously help by writing specific kinds of code, which makes it easier for the driver to identify what would work with dual issue and what wouldn't. But it's primarily driver dependent. And yes, that's great for console development. It would be easier to implement in consoles than, say, in PCs.

As it stands, RDNA3 dual issue requires the compiler to find instances where this thing works. And how well the compiler can do that is driver based. But even if the compiler is working at its peak, and the drivers are perfect, unless certain things are changed in RDNA3+ and above, dual-issue compute is never going to really give you 100% compute improvement. More like 50-70%..... at times.... if you are lucky... maybe.
 
Last edited:

Loxus

Member
Yup. It would be more than adequate. Especially when you consider that all these consoles are expected to do is peak at 120fps when possible.

To put it into perspective, the PC equivalent of the PS5 CPU is the Ryzen 7 3700X. That CPU, paired with a 2080 Ti, can run Fortnite at 1080p at over 210fps. Metro [email protected]. And the only things different between that CPU and the one in the PS5 are that the PS5's is clocked lower, and has only, I think, 8MB of cache vs the 32MB in the PC version of the chip.

No, you can get carried away. It, however, is not that groundbreaking, which is a good thing, because it means it's more realistic.

Cerny's patent would basically make an AMD GPU handle RT in the exact same way Nvidia and Intel GPUs handle RT. Currently, AMD GPUs use shaders (CUs) to handle BVH traversal, whereas Nvidia and Intel GPUs have BVH traversal handled by the RT cores too. That is why AMD GPUs are crap at RT. This patent basically catches AMD up. Want some context for how excited you should be?

Take Hogwarts. An extreme case, but it shows how messed up it is.

No RT, 1440p:
7900 XTX = 112fps
4080 = 109fps
3070 = 54fps
Intel A770 = 42fps

RT Ultra, 1440p:
4080 = 52fps
3070 = 24fps
Intel A770 = 20.5fps
7900 XTX = 15fps

Yup, this is why we need BVH acceleration. RT at Ultra cuts the performance of every other GPU by nearly exactly half... but it completely decimates the AMD GPU, dropping performance by over 80%.

Yes. It's driver dependent. Devs could obviously help by writing specific kinds of code, which makes it easier for the driver to identify what would work with dual issue and what wouldn't. But it's primarily driver dependent. And yes, that's great for console development. It would be easier to implement in consoles than, say, in PCs.

As it stands, RDNA3 dual issue requires the compiler to find instances where this thing works. And how well the compiler can do that is driver based. But even if the compiler is working at its peak, and the drivers are perfect, unless certain things are changed in RDNA3+ and above, dual-issue compute is never going to really give you 100% compute improvement. More like 50-70%..... at times.... if you are lucky... maybe.
A more extreme case: path tracing. AMD RT performance is not bad at all; they're just one gen behind Nvidia.
4080 - 87 fps
3080 Ti - 55 fps
3080 - 51 fps
7900 XTX - 50 fps




PS5 Pro should be using RDNA4 RT. So that Sony RT patent is more likely to be an API/software patent to utilize RDNA4 RT, since Sony doesn't use DirectX Ray Tracing.

When it comes to RDNA4 RT, AMD has an RT patent which features a Traversal Engine. I expect this to be featured in RDNA4.
GRAPHICS PROCESSING UNIT TRAVERSAL ENGINE


This is what AMD lacked in comparison to Nvidia's RT implementation.

In terms of dual issue, I wouldn't extrapolate from what's happening on PC to what will happen on consoles.
 

Tqaulity

Member
I think it should have been pretty self-explanatory that I was judging Sony first parties' efforts so far this gen; in the future they might drop some gems, but looking at Spider-Man 2, yeah, let's keep our expectations grounded. Compared to prior gens, their output has been underwhelming to say the least. We are halfway into the gen and so far there has been nada in terms of next-gen showcases, really. Everyone is more than happy to stay safe and stagnant in the cross-gen bubble, slightly upgrading their last-gen pipelines and calling it a day. Ratchet is nice, and great in terms of I/O usage, but in terms of visuals it's a good starting point, not like the technical tours de force we had come to expect at the halfway points of prior gens.

We had Killzone 2, Uncharted 2, GoW3 etc., and so many jaw-droppers on PS3 by that point. And even on the PS4, which was a relatively conservative upgrade, we had an easily distinguishable next-gen output within a few years of the gen. Meanwhile, look at the shit we've had so far on the PS5.
No, you're right there. This is MUCH slower than previous gens, and we haven't had that significant showpiece title. That's more a reflection of the state of game development today and how difficult (and time-consuming) it is. Obviously Sony isn't alone here. Remasters and remakes are all the rage because they are easier, faster, and cheaper than building a game from scratch. We're seeing devs that used to pump out new games every 2-3 years now taking twice as long to even have something to show, in some cases. It's sad and unfortunate.

But just pointing out that, based on what has actually released, we don't have much to judge in terms of how far Sony studios will go in pushing the technology envelope. The few examples we do have show them pushing further than anyone else, but as a whole it's still very much TBD three years in. Hopefully Spider-Man 2 will offer some hints once we get the full game, but yeah, I wouldn't expect anything revolutionary there either.
 