24 TF only?
> Seems to be mixing up GDDR6 bandwidth with DDR4. Looks like he is picking bits of info from different insiders...
Yup. Is that 24 calculated with dual issue or not? Sounds like he pulled numbers off a forum. Lol
Will this thing have big L3 cache for the cpu? 3D-V cache? Infinity cache for the gpu?
> Only? That is higher than I expect for 54 CUs (won't be 60 usable for obvious reasons).
Ok yes, it's silly to speculate over speculation, I get it. But IF these specs turned out to be true, let's think about how you could get to 24 TFLOPs with only a 54 CU part. The rumored 7700XT with 54 CUs clocked at a 2.6 GHz boost is only ~18 TFLOPs. That's already a pretty hefty clock increase over the base PS5. How do they get another 33% TFLOP increase with that config? Possibly an "astronomical" clock speed (i.e. >3 GHz)? Maybe bits from RDNA 4 provide some IPC gains not seen with RDNA 3? Or maybe they use a full 60 CU part (even then, it would still need a ~3 GHz clock to hit 24 TFLOPs). Makes you wonder, doesn't it?
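The arithmetic in the post above is easy to sanity-check with the standard RDNA FP32 formula (64 shaders per CU, 2 FLOPs per clock via FMA). A quick sketch — the CU counts and clocks here are the rumored figures, not anything confirmed:

```python
def tflops(cus: int, clock_ghz: float) -> float:
    """FP32 throughput: 64 shaders/CU x 2 FLOPs (FMA) per clock."""
    return cus * 64 * 2 * clock_ghz / 1000.0

def clock_needed(target_tf: float, cus: int) -> float:
    """Clock (GHz) required to hit a target TFLOPs figure."""
    return target_tf * 1000.0 / (cus * 64 * 2)

print(tflops(54, 2.6))       # ~17.97 TF -- the rumored 7700XT config
print(clock_needed(24, 54))  # ~3.47 GHz needed with 54 CUs
print(clock_needed(24, 60))  # 3.125 GHz needed with 60 CUs
```

Which is the point: 24 TF at 54 CUs needs an "astronomical" ~3.5 GHz, and even a full 60 CU part needs just over 3.1 GHz.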
> 24 TF at 60 CUs???
That's a 60 CU part clocked at 3.2 GHz or slightly under, which translates into 48 or so RDNA3 TFLOPS.
> 24 TF at 60 CUs???
He's still using RDNA2 to calculate TFLOPs.
> These are close to the core rumor leaks but seem a bit optimistic. 20 GB of RAM and 24 TFLOPs? Bring it on! This would be awesome and on the upper end of what Sony could do for a reasonable price by next year.
Don't read into that guy... that shit reeks of BS to me. Seems like he is just trying to jump on the leaks bandwagon for attention.
At the end of the day, I would love 24TFLOPs however they get there. Make it happen Sony!
24 TF at 60 CUs???
> Would be interesting. I know myself and my mates are 46-47, and my wife joined the party in the last few years at 45, so although we are not the majority, I do know that a lot will be between 35 and 50 and will have a fair amount of disposable income to throw at a Pro. I know myself and a good friend will pre-order day 1, and I imagine there are a lot like us.
Oh damn, hello fellow 40s crew! And yeah, if this thing is not going to go ballistic with the disk spinning up every 20 min, I am pre-ordering.
> I thought that guy was known for just making up stuff?
Yes, if this dude told me the earth was spherical, I'd believe it was flat.
> I keep reading this, but it seems a tad unrealistic. If there's one relatively unimportant component where they could cut costs to hit their target, this is it. I also imagine that those upgrading to a Pro will most likely be the "hardcore" users, who are even more likely to have expanded their storage with a third-party SSD they will just carry over. Just seems like a big waste of money that could have gone to another component or been used to reduce the retail price.
Yeah, considering you can get a good PCIe Gen 4 4TB for like $250-270, there's no point going over 1TB built in. Save on costs toward other components.
> 4nm is quite surprising
Refreshing as well; there's around a 20% increase in power efficiency over N5. I think the Pro will be around 350-400W total system power consumption.
> Refreshing as well; there's around a 20% increase in power efficiency over N5. I think the Pro will be around 350-400W total system power consumption.
That would be either a big or a loud console. I would expect active power use in gaming to be like 250W, maybe 300W absolute max.
> Agree, I think average power consumption will be around that ballpark. PS5 has a total power draw of 350W, but in operation it's around 200W and fluctuates.
Yup... it's no mystery actually. We already have detailed specs of the GPU this is going to be based on.
> Yup... it's no mystery actually. We already have detailed specs of the GPU this is going to be based on.
Can you really compare an APU to a dGPU?
That GPU is already rated as a 200W GPU. On 5nm. And if it's on 4nm? Then it would be drawing even less power than that. It has game and boost clocks of 2400 and 2600 MHz respectively; the PS5 Pro can sit in the middle, so... 2500 MHz? And it will obviously not have the massive 64MB infinity cache that GPU has. Throw in the CPU, and I can see the PS5 Pro APU pulling no more than 230W under load. The system would still need a 300W+ PSU though.
> Can you really compare an APU to a dGPU?
Yes, we can compare it. It's not going to be identical... but that is the best GPU to use to compare or extrapolate what the PS5 Pro could be. Especially if you know what the architectural carryovers are. And if we end up being off on just clocks... that's still pretty damn close.
PS5 has the same "hardware" as the 6700, but has the performance of a 6600xt due to lower clocks.
| | 7700XT | PS5pro |
| Fab Node | 5nm | 4nm/5nm |
| Compute Units | 54 | 54 |
| Clock (game/boost) | 2400MHz/2600MHz | 2500MHz |
| FP32 (RDNA3 dual issue) | 17.9TF (35.9TF) | 17.2TF (34.5TF) |
| Cache (L2/L3) | 2MB/64MB | 6MB/- |
| TDP | 200W | 200W+ (CPU + GPU) |
| Mem Bus | 192bit | 256bit |
> When has the 6600xt ever matched a PS5?
Returnal runs at high settings, 1440p 60, on a 6600xt. On PS5 it runs at medium, 1080p 60.
First off... 24 TF is not "only"...
Second, that tweet doesn't make sense to me. I can't think of a RAM config that gets you those numbers.
> I thought that guy was known for just making up stuff?
Trust me, he knows absolutely zero. He's always just combining various rumors.
> Honestly if it is to be 24tf I'd be extremely happy lol...
Yes. That is about the only thing to be certain of. At worst, the PS5 Pro will be using RDNA3, which is already out now, so a safer bet is to say it will use RDNA3+. Basically something in between 3 and 4.
I'm a bit out of the loop with GPU/CPU tech these days, but is the latest RDNA potentially being used in the PS5 Pro? How does it double its TF output? I know I could Google it, but what would be the point of discussion!!!
> You think Pro will use a newer CPU???
Not sure about this one. I am sure we all would love it to use Zen 4 and all that. But in truth, I personally see them doing no more than increasing the CPU clock and the CPU cache. Its lack of cache is the reason it's underperforming. If they were to throw in Zen 4 cores with the current cache structure, it would be just as bad as it is now.
Since you say you are out of the loop, I will give a simple explanation of TF doubling in RDNA3. In RDNA3, each compute unit can process two instructions per clock instead of one. But only if the GPU driver can sort out which instructions best fit into this get-two-per-clock thing. That is the most basic way I can explain it.
A slightly more detailed (and complex) way: each CU has 64 shaders, aka ALUs. The SIMD units that cater to those 64 shaders have the ability to handle two instructions simultaneously. The problem is that it's still based on the original CU foundation. So you have hardware that can handle two instructions per clock, but with the underpinnings of something that was designed to handle one per clock. This means that at best, in real-world performance, you will never get 2x the TF number with RDNA3. Maybe you get like 60-70% more, but not 100% more. Right now, however, you would be hard-pressed to even see like 20-30% improvements, because the ability to dual issue is currently very driver dependent. The compiler basically has to be good at finding instances that favor dual-issue utilization. And AMD is not known for its great drivers.
But it's something that will likely improve over time, or even better, something that would be easier to implement in a console.
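The point that dual issue only pays off on the fraction of instructions the compiler actually pairs can be put into a toy throughput model. The pairing rates below are illustrative guesses, not measured figures:

```python
def effective_tflops(cus: int, clock_ghz: float, pair_rate: float = 0.0) -> float:
    """Toy RDNA3 throughput model.

    Base FP32 is 64 ALUs/CU x 2 FLOPs (FMA) per clock. Dual issue can
    add up to +100% on top, but only on the fraction of instructions
    the compiler manages to pair (pair_rate in [0, 1]).
    """
    base = cus * 64 * 2 * clock_ghz / 1000.0
    return base * (1.0 + pair_rate)

print(effective_tflops(54, 2.5))       # ~17.3 TF with no pairing at all
print(effective_tflops(54, 2.5, 0.3))  # ~22.5 TF at a 30% pairing rate
print(effective_tflops(54, 2.5, 1.0))  # ~34.6 TF, the on-paper number
```

This is why the "doubled" RDNA3 TFLOPs figure is a ceiling: real-world gains track the pairing rate, which today sits far below 100%.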
> Returnal runs at high settings, 1440p 60, on a 6600xt. On PS5 it runs at medium, 1080p 60.
As the PS5 runs at a fixed 1080p and mostly locked 60fps, we have no idea how high it could run.
> I'd just like to point out that your comment about Sony first parties having little ambition to push tech advancement is a little disingenuous, given that Sony has yet to release more than a small handful of new "built for PS5" games to date. We really don't know how ambitious they are and what those games will look like. All we've seen so far are Demon's Souls (arguably the best looking next gen game to date), Ratchet & Clank (arguably the best looking next gen game to date, along with the best utilization of fast I/O to date), AstroBot (best DualSense utilization yet), and Returnal (technically not 1st party when it was developed, but still one of the best uses of DualSense, Tempest 3D audio, and the SSD in a game to date). Remember, both Spiderman games were PS4 ports, Horizon FW was cross gen, God of War R was cross gen, GT7 was cross gen, Naughty Dog has only released remakes and remasters so far, Sucker Punch has only done an update to GOT, and MLB of course is still cross gen.
I think it should have been pretty self-explanatory that I was judging Sony first parties' efforts so far this gen. In the future they might drop some gems, but looking at Spiderman 2, yeah, let's keep our expectations grounded. Compared to prior gens their output has been underwhelming, to say the least; we are halfway into next gen and so far there has been nada in terms of next gen showcases. Everyone is more than happy to stay safe and stagnant in the cross gen bubble, slightly upgrading their last gen pipelines and calling it a day. Ratchet is nice and great in terms of I/O usage, but in terms of visuals it's a good starting point, not like the technical tours de force we had come to expect at the halfway points of prior gens.
I mean, if you're judging upcoming games based on their trailers, I wouldn't, as many of the games from this year's PS Showcase are still far out. But in terms of what has actually released, all 4 are some of the most ambitious "next gen" games to date.
> Returnal runs at high settings, 1440p 60, on a 6600xt. On PS5 it runs at medium, 1080p 60.
Returnal is an outlier; in almost all other benchmarks the gimped bandwidth makes the 6600xt fall behind the PS5 and most other competitors. The card is terribly designed.
> Thanks for that, I understood you clearly. Sounds like it has huge potential if Cerny and the other engineers nail it, which I think they will.
Yup. It would be more than adequate. Especially when you consider that all these consoles are expected to do is peak at 120fps when possible.
Do you think the Zen 2 CPU with a bigger cache / clock speed will be competitive even in late 2024?
> Last but not least, what is Cerny's secret sauce with the RT patent? Should we be expecting huge improvements to RT capability, or not get carried away?
No, you can get carried away. It is not that groundbreaking, however, which is a good thing, because it means it's more realistic.
Sounds like the PS5 Pro has big potential to be a worthy mid gen refresh!!!
> You're saying the dual-issue compute system is driver dependent? This would mean it requires no input from developers/programmers.
Yes. It's driver dependent. Devs could obviously help by writing specific kinds of code, which makes it easier for the driver to identify what would work with dual issue and what wouldn't. But it's primarily driver dependent. And yes, that's great for console development. It would be easier to implement in consoles than, say, in PCs.
I'd be curious to see what kind of implications this would have on console, as the hardware is fixed as well as the drivers.
> Yup. It would be more than adequate. Especially when you consider that all these consoles are expected to do is peak at 120fps when possible.
A more extreme case: path tracing.
To put it into perspective, the PC equivalent of the PS5 CPU is the Ryzen 7 3700X. That CPU, paired with a 2080 Ti, can run Fortnite at 1080p at over 210fps. Metro [email protected]. And the only things different from that CPU to the one in the PS5 are that the PS5's is clocked lower and has only, I think, 8MB of cache vs the 32MB in the PC version of the chip.
Cerny's patent would basically make an AMD GPU handle RT the same way Nvidia and Intel GPUs handle RT. Currently, AMD GPUs use shaders (CUs) to handle BVH traversal, whereas Nvidia and Intel GPUs have BVH traversal handled by the RT cores too. That is why AMD GPUs are crap at RT. This patent basically catches AMD up. Some context for how excited you should be:
Take Hogwarts. An extreme case, but shows how messed up it is.
No RT. 1440p.
7900XTX = 112fps
4080 = 109fps
3070 = 54fps
Intel A770 = 42fps
RT Ultra. 1440p
4080 = 52fps
3070 = 24fps
Intel A770 = 20.5fps
7900XTX = 15fps
Yup, this is why we need BVH acceleration. RT at Ultra cuts the performance of every other GPU by roughly half... but it completely decimates the AMD GPU, dropping its performance by over 80%.
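The drops can be computed straight from the Hogwarts numbers above:

```python
# 1440p fps from the Hogwarts Legacy figures quoted above
no_rt = {"7900XTX": 112, "4080": 109, "3070": 54, "A770": 42}
rt_ultra = {"7900XTX": 15, "4080": 52, "3070": 24, "A770": 20.5}

for gpu, fps in no_rt.items():
    drop = 100.0 * (1.0 - rt_ultra[gpu] / fps)
    print(f"{gpu}: {drop:.0f}% slower at RT Ultra")
# The Nvidia/Intel cards all lose roughly half; the 7900XTX loses ~87%.
```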
As it stands, RDNA3 dual issue requires the compiler to find instances where this works, and how well the compiler can do that is driver based. But even if the compiler is working at its peak and the drivers are perfect, unless certain things change in RDNA3+ and above, dual-issue compute is never going to give you a 100% compute improvement. More like 50-70%... at times... if you are lucky... maybe.
> I think it should have been pretty self-explanatory that I was judging Sony first parties' efforts so far this gen. In the future they might drop some gems, but looking at Spiderman 2, yeah, let's keep our expectations grounded. Compared to prior gens their output has been underwhelming, to say the least; we are halfway into next gen and so far there has been nada in terms of next gen showcases. Everyone is more than happy to stay safe and stagnant in the cross gen bubble, slightly upgrading their last gen pipelines and calling it a day. Ratchet is nice and great in terms of I/O usage, but in terms of visuals it's a good starting point, not like the technical tours de force we had come to expect at the halfway points of prior gens.
No, you're right there. This is MUCH slower than previous gens, and we haven't had that significant showpiece title. That's more a reflection of the state of game development today and how difficult (and time consuming) it is. Obviously Sony isn't alone here. Remasters and remakes are all the rage because they are easier, faster, and cheaper than building a game from scratch. We're seeing devs that used to pump out new games every 2-3 years now taking twice as long to even have something to show in some cases. It's sad and unfortunate.
We had Killzone 2, Uncharted 2, GoW 3, etc., and so many jawdroppers on PS3 by that point. Even on the PS4, which was a relatively conservative upgrade, we had an easily distinguishable next gen output within a few years of the gen. Meanwhile, look at the shit we've had so far on the PS5.
> Refreshing as well; there's around a 20% increase in power efficiency over N5. I think the Pro will be around 350-400W total system power consumption.
I doubt it. For one reason,