Panajev2001a
GAF's Pleasant Genius
It will not, unless you choose to get the 10% CPU clock boost.

Also, the fact that the GPU will be clocked lower than vanilla PS5 for Pro-patched games is very strange.
I am a huge fan of Mark Cerny, but this is not his best work.
Do you see the 67ms frame-time spike and the CPU threads going from 15% to 84%? This is bad CPU optimization. A 30% better CPU wouldn't help the Pro here. And again, consoles (notably PS5) have very different APIs than PC. We have seen games with plenty of "CPU" problems on PC running fine on PS5.

Talking about CPU-limited games:
Half of the 3080 is used here; the game is bottlenecked by the 5800X3D, a CPU that is much better than Zen 2.
I mean, this example falls flat here. The ROG Ally, Steam Deck, etc. are extremely GPU limited. You're talking about 4x ray tracing improvements but then claim the CPU isn't a meaningful difference between the ROG Ally and the Steam Deck. Isn't the ROG Ally's GPU like a GTX 1650? Of course it won't see much of a meaningful difference between a Zen 4 CPU and a Zen 2 CPU.
Sony should've at least had the decency to put a Zen 3 in there. It would turn unstable 30 FPS CPU-bound games into rock-solid 30 FPS modes and would allow a 40 FPS mode to be a thing, or maybe unlocked 45 FPS.
10:40
Regardless, notice how limiting the 3600 is for the 3060 Ti here in Novigrad (Witcher 3 with ray tracing) as well, and see how much Zen 3 with higher clocks helps with that. This 3600 literally drops below 30 FPS in Novigrad with ray tracing. Zen 3 is a must for ray tracing in 3rd party games if you ask me.
I guess this was always going to happen, and that it's happening now is probably a sign that we are at least going in the right direction. Albeit 5-6 years late.

But what exactly is wrong with their opinions? This Pro compared to the PS4 Pro is such a tiny margin in quality jump, which just proves how useless it is. The sales of the PS4 Pro were 1/10th of the regular PS4. The way things are going, this is gonna sell half that.
Bottlenecked by shitty optimization, not the CPU.
In a scene that has nothing special to render, such low performance can only be a lack of optimization.
It's not like there is a huge amount of detail, or a large number of NPCs, complex physics, etc.
When devs suck so much at optimizing their own game, no amount of CPU power will ever be enough.
1.5% only if you enable the high frequency CPU mode. So DF is off their rocker here…

It's weird to think that the DF crew thinks the Pro design is like the Series X; the clock difference between the base PS5 and the Pro is like 2%.
Even if… whoa… slow and wide.

Actually no, the PS5 Pro GPU seems to be clocked at 2.18 GHz, -2.2% compared to the 2.23 GHz of the base model.
Unless the 33.5 TFLOPS number is wrong.
Has anyone leaked if Cerny is the architect of this system?
They believe it's because of two main reasons: (1) clock speed changes; and (2) architecture size.
"PS5 Pro only has limited clock speed increases (or actual decreases, potentially) and the size of the GPU architecturally has not doubled in the way it did with PS4 Pro"
The CPU clock uplift is just 0.3 GHz, while the calculated GPU clock speed (via the stated TF figures) may actually be lower than the standard PS5's.
"What's curious is that Sony's stated teraflops figure suggests a peak GPU clock speed of 2.18GHz - which is actually a touch slower than the 2.23GHz in the standard PS5. Again, this does suggest either a conservative power limit, retaining the 6nm silicon process technology - or both"
PS5 runs this game like shit. And the supposed Pro CPU uplift is 10%.
So, what's the big idea here? Upgrading to a top-of-the-line CPU isn't doing much, is it? Do you want Sony to try and design a monster tower €4,000 PC so people that can't multi-thread have more room in their CPU budget and still fail? People do get that this stuff still has to sell at a somewhat mass-market price, don't they?

Dragon's Dogma 2 currently has high demands on hardware, with the Steam Deck struggling to run at playable rates, and even high-end machines with an RTX 4090 paired with an AMD 5800X3D CPU can drop into the 30s whilst in the more densely populated towns.
DF have the tech credentials of a crackhead superhero team.
The 33.5 TF figure is dual-issue, and to reach it you'd need a GPU clock of about 2.34 GHz. 56 CUs * 64 Shader Cores * 2 * 2.34 GHz = 16.7 TF * 2 (dual-issue mode) = 33.5 TF.
Exactly the figure in the papers.
So a supposed "tech-literate" channel can't even calculate TFs for AMD GPUs or tell the difference between single-issue and dual-issue TFs? Or know that if it's a 60 CU chip then 4 CUs are likely disabled for yield purposes, because that part of the APU is likely monolithic?
Like did they "conveniently" forget this just to push a smear campaign against the Pro using FUD? Not a single person corrected them that they should've calculated the numbers by disabling four CUs like with the PS5 & Series X? I'm pretty sure people like Kepler were saying there'd be disabled CUs on the chip so not all 60 are active.
What a clown show DF have become.
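The arithmetic being argued over above is easy to check directly. A minimal Python sketch of the RDNA-style dual-issue formula (CUs × 64 shader cores × 2 FMA ops × 2 for dual-issue × clock); note the 56 vs 60 active-CU counts are the thread's competing assumptions, not confirmed specs:

```python
def dual_issue_tflops(cus: int, clock_ghz: float) -> float:
    """Peak dual-issue TFLOPS: CUs * 64 shader cores * 2 (FMA) * 2 (dual-issue) * clock."""
    return cus * 64 * 2 * 2 * clock_ghz / 1000.0

def clock_for_tflops(tflops: float, cus: int) -> float:
    """Back-solve the GPU clock (GHz) implied by a stated TFLOPS figure."""
    return tflops * 1000.0 / (cus * 64 * 2 * 2)

# DF's reading: with 60 active CUs, the stated 33.5 TF implies ~2.18 GHz,
# a touch below the base PS5's 2.23 GHz.
print(round(clock_for_tflops(33.5, 60), 2))  # 2.18

# The counter-claim: with 56 active CUs you'd need ~2.34 GHz for the same figure.
print(round(clock_for_tflops(33.5, 56), 2))  # 2.34

# Relative clock change vs base PS5 (2.23 GHz) under the 60-CU reading:
print(round((2.18 - 2.23) / 2.23 * 100, 1))  # -2.2
```

So both camps are applying the same formula; the disagreement reduces entirely to how many CUs are assumed active.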
Funny that you are so condescending in this post when the leaks say it is a 64/60 design. So DF were right on the money.
A PC CPU runs this like shit. A 50% overclock wouldn't be enough for it to run at 60fps on console. A 5800X3D wouldn't be enough for it to be a solid 60fps game on PS5 Pro. Sony's design is perfectly fine for their goal.

Obviously Capcom is to blame here; the game runs terribly on all common hardware and doesn't look next gen at all. But what can we do? Games like that happen, and a Pro console won't help them in any way.
If GTA6 is limited like that it will be stuck at 30 FPS, and GTA is doing FAR more than this game...
But that's not what DF are saying. They are saying they are disappointed by the 10% overclock, saying it's not enough to double FPS. It's just an unrealistic goal if even a $500 CPU upgrade can't accomplish that for one uniquely unoptimized game out of 500. Just a BS narrative from them. Look at how the CPU threads are running: 50% average for both CPU and GPU, with very disparate min and max across CPU threads. The game is just badly optimized.
Did they complain like crazy when their X1X was still using that piece of shit Jaguar? Did they write paragraphs, articles upon articles, about those "CPU limited games" that would plague the X1X generation and limit its potential? They are just Xbox and PCMR warriors out there with a unique goal: to spread FUD after FUD against Sony's next console. They are 100% doing that now and they'll keep doing it until the next Xbox. They have just become full propagandists for Microsoft. It's pathetic.
Damn, people in here are really trying to suggest that an 8/16 Zen 2 CPU is the limiting factor when going from 30 => 60 fps. I can see a potential issue at 120 fps, but there's no way, unless a game is terribly unoptimised, that the PS5 CPU is going to hold back anything at such low frame rates.
Series s

Why in your opinion is Starfield 30fps on Xbox, but any half-decent CPU from the last five years can run it at 60+?
Lmao no way??

Actually they parroted that "Jaguar Evolved" thing that Microsoft wrote in the spec sheet for the X1X.
Relax bro, it was a single section in a video about how the CPU is a bit underwhelming. Not a fucking witch-hunt.
Again, the Pro is likely a 64 CU design with 60 CUs working across 2 shader engines, if that's real.
Christ, can't even make this shit up.

Find the differences:
Inside the next Xbox: Project Scorpio tech revealed
Digital Foundry has the specs, has seen it running, and has talked to the people who built it.
www.eurogamer.net
"All signs point to the upclocked Jaguar cores we find in Xbox One, and Scorpio's CPU set-up is indeed an evolution of that tech, but subject to extensive customisation and the offloading of key tasks to dedicated hardware."
Scorpio is console hardware pushed to a new level
Opinion and analysis from Digital Foundry's Rich Leadbetter, based on our exclusive deep dive with Microsoft on the next Xbox's tech.
www.eurogamer.net
The only difference is that you are emotionally invested in the Sony console and it hurts your feelings when they are not impressed by the Pro specs.
The PS5 has shared memory so neither the CPU nor the GPU has to decompress anything; the custom hardware does it. You think the engineers just put the custom hardware in the PS5 for nothing?

The PC doesn't have custom hardware for that. Just pointing out you don't need to use the CPU for it. Decompression can be done on the GPU.
Your words, not theirs...

Now PS5 Pro is a piece of shit, of course...
They are not getting an exclusive teardown from Sony. Cerny doesn't give a fuck about Digital Foundry
Series s
The Pro had one job and one job only: double the framerate of each mode. If it can't do that, it's a failure of a console that shouldn't be supported.

This entire post is damage control. My point stands.
No. I never said anything of the sort. The I/O block is a great addition to the PS5 that allows for rapid decompression of assets. But you don't need 9 CPU cores on PC to do that; it can be done on the GPU with minimal overhead.
Are you really gonna hit me with Creation Engine 2 here? Steve from Gamers Nexus released an excellent video analysing CPU bottlenecks in Starfield. It was indeed pretty damning. A 4090 paired with an AMD 3600 CPU lost around 50% of its perf (though still > 60fps). It's only a 6/12 chip but likely somewhat comparable to the Zen 2 8/16 with lower clocks. This was an extreme scenario running at the lowest in-game settings. The more you stress the GPU, the less the CPU becomes the bottleneck. That's why games show little difference between CPUs when benchmarked at higher resolutions.
With the above, the majority of games are nowhere near as heavy on the CPU as Starfield. For the vast majority of games, the CPU is not the limiting factor going from 30fps => 60fps. It's almost always GPU limited. Why is that statement controversial at all?
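The resolution point above can be illustrated with a toy frame-time model. This is a deliberate simplification I'm assuming for illustration (CPU time per frame is roughly resolution-independent, GPU time scales roughly with pixel count, and the frame rate is gated by whichever is slower); the millisecond figures below are hypothetical, not measurements:

```python
def fps(cpu_ms: float, gpu_ms_at_1080p: float, pixel_scale: float) -> float:
    """Toy model: frame time = max(CPU time, GPU time).
    CPU work per frame is treated as resolution-independent;
    GPU work scales with pixel count (pixel_scale, relative to 1080p)."""
    gpu_ms = gpu_ms_at_1080p * pixel_scale
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical parts: a CPU needing 12 ms/frame, a GPU needing 6 ms/frame at 1080p.
# At 1080p the CPU is the bottleneck (~83 fps, the GPU sits half idle):
print(round(fps(12.0, 6.0, 1.0)))  # 83
# At 4K (4x the pixels) the GPU becomes the bottleneck (~42 fps):
print(round(fps(12.0, 6.0, 4.0)))  # 42
# ...and a 2x faster CPU changes nothing at 4K:
print(round(fps(6.0, 6.0, 4.0)))   # 42
```

Which is why CPU benchmarks diverge at low resolution and converge at 4K: once the GPU term dominates the max(), extra CPU headroom is invisible.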
Microsoft Xbox One X: the Digital Foundry verdict
The workmanship that's gone into Microsoft's latest console is exceptional. To quadruple graphics power over the original model but to retain essentially the same form factor and the same acoustics points to a level of engineering that really does take console design to the next level. Xbox One X is a beautifully designed little box that does the job assigned to it without taking up much space or making much noise - the latter being our biggest bugbear with PlayStation 4.
Beyond that, what we can definitely say is that the machine is a love letter to the core gamer, with many forward-looking features. The implementation of FreeSync support - something we didn't have time to fully test - is the kind of feature we didn't expect to see until at least the next console generation. Meanwhile, the backwards compatibility features really are superb - if you've stayed with Xbox across the generations, you're in for a real treat here. There's a sense that Microsoft is paying homage to its roots, honouring its past successes and making genuine efforts in curating a great library - all at no cost to the user.
No mention at all of the Xbox One X's biggest bottleneck: the CPU they didn't change…
It decompresses at the speed of 9 Zen 2 CPU cores; that doesn't mean you "need" 9 cores to decompress. Still, the point being: you cannot compare apples to apples between a PC and a PS5 with similar CPU/GPU specs. The PS5 is much more efficient with its custom hardware designed specifically for gaming workloads. This includes the PS5's SoC and motherboard design as well.
When the PS1 launched in 1994, its R3000 CPU @ 33MHz was about 5x slower than the fastest P5 Pentium (100MHz) available.

Please do. I know that the NES had a really old processor, but the goal was to be cheap, not bleeding edge. No idea what you are talking about in the PS1 era. I read that it had many processors, but nothing else.
I don't have a direct comparison with this gen's consoles, as I've never benchmarked a PS5 hands-on. But I also take issue with benchmarks taken on a different codebase and a different architecture - that just can't be a reliable methodology if the goal is to measure architecture performance.

@Fafalada maybe some of what you are saying regarding cache optimization is true on first-party, but third party games have been tested on the console CPUs running the standard PC version and the results are essentially identical to what we see on console. Not a lot of secret sauce and/or magic optimization to be found there. In fact, most evidence points to CPUs running worse on GDDR6 than DDR4 due to latency issues.
Bigger change from what? Remember the Pro is a new design, not a shrink, so they might as well invest the extra R&D for cost savings down the line on cooling and die size.

I think it is the Slim coming for free; optimising for 6 nm is still a cost. I would not trust this process of porting the design to it to be completely free.
Anyways, doing the entire design with 5nm would be a bigger change and I do not think Sony thinks it is worth that cost. Given what they have designed / the target they have (unless it is crazy expensive) they may be right.
All of this is correct. They could shove a 4090 in here and it won't matter because of that CPU. What an awful console.

They will be happy because at least they're not being forced to optimize extremely CPU-bound code to hit 30 fps on 1.6 GHz Jaguar cores,
but they will have no trouble targeting the very same 30 fps on Zen 2 cores as well, which is why some people are getting worked up. If the PS5 Pro had focused on a CPU upgrade while keeping the GPU more or less similar, or with a slight upgrade plus a big upgrade on upscaling, it would've been better for high-framerate enjoyers.
I don't really care about playing games at 30 fps or 60 fps, so I don't actually care what the PS5 Pro ends up with. But it is fun to participate in discussions regardless. If you have to ask me though, I'd prefer more balanced builds rather than Xbox One X-like builds where the focus is on resolution and graphics (despite myself building a PC with the mindset of the Xbox One X; I'm just an odd person overall).
If you told them you built a Ryzen 3600 + RTX 4070 rig, they would probably be like "but that CPU will hold that GPU back, why didn't you get something decent and modern that can accompany the 4070 properly?" But if Sony does it, Cerny is a genius, the PS5 is not CPU limited at all, the rules are different, Spider-Man runs at this framerate, 3rd parties suck, GTA 6 sucks anyway, etc. etc.
It is like someone with a 3600 and a 4070 getting 80+ fps in Spider-Man and saying the game is optimized and their rig is fine, and when they get heavily bottlenecked in Jedi Survivor, they blame the developer, when in reality they could've solved the bottleneck by pairing that 4070 with at least a Ryzen 7600. That is the core of the problem. Sometimes you gotta give the GPU the CPU it deserves. Otherwise you're just limiting the build to specific resolution/framerate parameters. Which is okay by itself; you can still get great mileage out of that GPU. But you still squander the potential for high-framerate experiences.
With a 3600 and a 4070, you won't be CPU limited at 4K in the vast majority of titles, especially ones released before 2021, more so if you just push the ultra settings the 4070 can take. But then you try Jedi Survivor, Hogwarts Legacy, or Dragon's Dogma and quickly realize this CPU is simply not meant for a 4070, unless you specifically gimp the 4070 and push extreme graphical settings that target 30 FPS, all because the CPU can't keep up. Where is the sense in that? Why sacrifice a 4K/optimized settings/DLSS Quality 60 fps experience and go for native 4K, unoptimized ultra ray tracing settings just so you can saturate the GPU at a 30 fps target? That is what Cerny is trying to do here by keeping the same CPU and giving the GPU a great ray tracing improvement and mild raster improvements. It is the exact same logic they had with the Xbox One X and PS4 Pro. It is a mistaken approach, but they keep doing it, because people really do have no trouble with 30 fps on consoles.
You realize DF did a test using the exact XSX CPU (I think it's called the 4700S kit), paired it with a 6700 GPU, and ran Starfield, right? The game was GPU bottlenecked on that setup, not CPU bottlenecked, and ran an average of 40fps, in some cases even 60fps, using FSR Quality 4K. When they switched it to FSR Quality 1440p, it was mostly 60fps everywhere.
Bullshit lol. Xbox will be dropping more info on their nextbox around the launch of this machine.

Only about 8 more months of this back and forth