
PS5 Pro devkits arrive at third-party studios, Sony expects Pro specs to leak

winjer

Gold Member
You see what I mean... you have been reading to argue as opposed to reading to understand.

From the beginning of this I had made it clear that there will be a cache increase.

What I have disagreed with you on was all that nonsense about chiplets and as much as 64MB of cache.

Again... please READ. Or do you need me to bold where I said increase the cache from 8-16MB? Or should I quote myself again from at least 2 other times I have mentioned the cache increase?

In this second post I even bolded where I said it.

I said that it would need a more robust memory subsystem. Be it more cache, more channels or clock speeds.
I argued for chiplets and said that a chunk of L3 would work very well, to improve performance and lower power usage. And I gave the example that the 7800XT, which doesn't have to share bandwidth with a CPU, has a 256 bit bus, 19.5 Gbps memory and has 64MB of L3 cache.
You started by saying that 16 Gbps memory was enough, and now you've changed it to 19 Gbps.
Even with double the L2 cache, which RDNA2 doesn't do, it will not be enough if we are to consider a GPU that could be twice as fast as the PS5.
The 8MB of L2 you suggest for the GPU would probably mean a hit ratio under 20%. That is not enough.
Even a lousy RX 6400 has 1MB of L2 + 16MB of L3. And this is a 3.565 TFLOPS GPU.

[Attached image: AMD graph of cache size vs. hit rate]
 

twilo99

Member
For me, image quality is more important than higher frame rate (most of the time). So I'm not interested in 120fps. 60 maybe, but it's not a must. I would be OK with 1440p if the image quality were good. If the 1k console is only giving me higher resolution and more frames, then it's not my priority. What I would like is better lighting, textures, less aggressive LOD, higher polycount models or more advanced shaders. That is where I see the improvements and proper utilization of better hardware. Unfortunately, I won't receive that, because of the other hardware versions.

I see, but I think higher end hardware would give you gains across the board.

A GPU on the level of a 7800 XT is actually very competitive, especially for a console. Zen 2 can work with some tweaks, surely 😵‍💫

It's a mid-gen refresh, not a PS6, so it doesn't need to be a killer. As for those saying it's pointless and not needed: to you maybe, but to others it is definitely needed. Choice is good; if you don't like it, don't buy it.

Nothing worse than the "it's not needed" brigade. Speak for yourselves, some people like performance.

There'll have to be some substantial tweaks with a sprinkle of magic sauce... or rather dipped in it, I guess.
 
Last edited:

Haint

Member
And I'm sure AMD could do that if the TDP of the RTX 4080 was 215W like the 2070S...but it's rated at 320W, almost 50% higher. There's a reason new architectures take around 24 months to develop and not 12 months like at the beginning of the 8th generation of consoles.

If Sony says "fuck that" and doesn't mind a huge, power hungry monster, then RTX 4080/7900 XTX in rasterization might be feasible (at a large premium, and I actually didn't check if they ever had an APU like that), but I somehow doubt that the Pro will be pulling 300W.

Never mind that Turing sacrificed die space for tensor and RT cores.
The higher end 40 series are all comically overvolted in an effort to justify the price hikes and product stack shift. This isn't stupid or surprising, as 99.9% of people buying $1300+ desktop GPUs obviously couldn't give less of a fuck about power consumption/efficiency. Undervolting to sane levels sees the 4080 deliver 95% of the performance at around 230W or so on the heaviest gaming loads.
 
Last edited:

shamoomoo

Banned
I think it's just the case that it's not as fast as people think it is. 24 Gbps GDDR6 is available, yet Sony chose 18.
From what I understand, GDDR6 is more power hungry than GDDR5, and I'm not sure if the bandwidth of the Pro is more than sufficient given the power increase.


We know the PS5 uses DCC, and if the clocks stay the same or are faster, the onboard cache will help out some even if the capacity isn't increased over the PS5.
 

Mr.Phoenix

Member
I said that it would need a more robust memory subsystem. Be it more cache, more channels or clock speeds.
I argued for chiplets and said that a chunk of L3 would work very well, to improve performance and lower power usage. And I gave the example that the 7800XT, which doesn't have to share bandwidth with a CPU, has a 256 bit bus, 19.5 Gbps memory and has 64MB of L3 cache.
You started by saying that 16 Gbps memory was enough, and now you've changed it to 19 Gbps.
Even with double the L2 cache, which RDNA2 doesn't do, it will not be enough if we are to consider a GPU that could be twice as fast as the PS5.
The 8MB of L2 you suggest for the GPU would probably mean a hit ratio under 20%. That is not enough.
Even a lousy RX 6400 has 1MB of L2 + 16MB of L3. And this is a 3.565 TFLOPS GPU.

[Attached image: AMD graph of cache size vs. hit rate]
Why are you still doing this? Do you just like to argue?

You do realize the PS5 GPU only has 4MB of L2 cache right? The XSX has 5MB. Like this whole thing is just crazy to me. You are talking about all these other things out there and ignoring the PS5 that is on the market RIGHT NOW. Like your stupid example you just gave of the RX6400 which has more L3 cache than the 10TF PS5 has TOTAL cache. smh...

And you are misquoting me and saying stuff I didn't say. While conveniently ignoring all the stuff I said earlier that negates this entire discussion.

I never said 16 Gbps was enough. Please quote where I said that. I can't even have said 16 Gbps RAM chips would be enough, because I don't even know if it's going to use 16 Gbps. The first time I mentioned 16 Gbps was simply to let you know that they are using faster RAM chips than what was used in the PS5. The actual rumor claims it's supposed to be 18 Gbps. I personally don't even believe that either.

Like you keep doing this thing where you are making an argument based on what is on the PC side of things, or what you think would work. Why don't you make your argument based on what Sony has already done?

The current PS5 CPU has 8MB of L2 cache and the GPU has 4MB of L2 cache. You are somehow suggesting that they increase total L2 cache in the PS5 from 12MB to 64MB in the Pro to accommodate a 1.8-2x bump in performance????

Are you nuts? This is the last post I am making to you on this matter. At this point it's best for us to agree to disagree, and when the PS5 Pro is released we will see if you are right and it has either more than a 256-bit bus and/or anything above 24MB of L2 cache... though you said 64MB... but I am that confident it won't be more than 24MB of total L2 cache at best. 12-16MB CPU + 8MB GPU.
 
Last edited:

winjer

Gold Member
Why are you still doing this? Do you just like to argue?

You do realize the PS5 GPU only has 4MB of L2 cache right? The XSX has 5MB. Like this whole thing is just crazy to me. You are talking about all these other things out there and ignoring the PS5 that is on the market RIGHT NOW. Like your stupid example you just gave of the RX6400 which has more L3 cache than the 10TF PS5 has TOTAL cache. smh...

And you are misquoting me and saying stuff I didn't say. While conveniently ignoring all the stuff I said earlier that negates this entire discussion.

I never said 16 Gbps was enough. Please quote where I said that. I can't even have said 16 Gbps RAM chips would be enough, because I don't even know if it's going to use 16 Gbps. The first time I mentioned 16 Gbps was simply to let you know that they are using faster RAM chips than what was used in the PS5. The actual rumor claims it's supposed to be 18 Gbps. I personally don't even believe that either.

Like you keep doing this thing where you are making an argument based on what is on the PC side of things, or what you think would work. Why don't you make your argument based on what Sony has already done?

The current PS5 CPU has 8MB of L2 cache and the GPU has 4MB of L2 cache. You are somehow suggesting that they increase total L2 cache in the PS5 from 12MB to 64MB in the Pro to accommodate a 1.8-2x bump in performance????

Are you nuts? This is the last post I am making to you on this matter. At this point it's best for us to agree to disagree, and when the PS5 Pro is released we will see if you are right and it has either more than a 256-bit bus and/or anything above 24MB of L2 cache... though you said 64MB... but I am that confident it won't be more than 24MB of total L2 cache at best. 12-16MB CPU + 8MB GPU.

Dude, chill out. Just because someone doesn't agree with you, doesn't mean they are out for you or something.
If you don't want to talk tech, then why did you bother quoting me in the first place? Yes, because it was you who quoted me first, twice.

Yes the PS5 GPU has 4MB of L2, but it also has 448GB/s. And even with that, it requires things like Anisotropic filtering to be at low values, because the console is limited by memory bandwidth. Something that is almost free on any PC GPU in the last decade.
But if we get a PS5 Pro with only 18Gbps, it will not be enough. And the 8MB of L2 will do very little for hit rates, so the GPU will still have to go plenty of times to memory.
You even ignore the official graph from AMD that I posted about cache size vs hit rates.
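To put rough numbers on why the hit rate matters so much, here's a simplified first-order model (my own back-of-envelope, not AMD's published data): treat the hit rate as the fraction of GPU requests that never touch GDDR6, so only the misses eat raw bandwidth.

```python
# Simplified first-order model: effective bandwidth vs. cache hit rate.
# Assumption (mine, for illustration): every cache hit is a request that
# never reaches GDDR6, so only misses consume raw memory bandwidth.

def effective_bandwidth(raw_gb_s: float, hit_rate: float) -> float:
    """Raw GDDR6 bandwidth amplified by the on-die cache hit rate."""
    return raw_gb_s / (1.0 - hit_rate)

raw = 576.0  # GB/s for a 256-bit bus at 18 Gbps
for hit_rate in (0.0, 0.2, 0.5):
    print(f"hit rate {hit_rate:.0%}: ~{effective_bandwidth(raw, hit_rate):.0f} GB/s effective")

# hit rate 0%:  ~576 GB/s effective
# hit rate 20%: ~720 GB/s effective
# hit rate 50%: ~1152 GB/s effective
```

A sub-20% hit rate barely moves the needle, which is the whole point of a bigger cache.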

And stop pretending that a console has some magic that makes it different from what a PC GPU has in memory bandwidth.
There is a reason why consoles have to reduce graphical options that are heavy on memory bandwidth.
 
If you ask me, I think the problem is that people are comparing apples to oranges. As far as consoles go, if we are looking at the current gen, the PS5/XSX are "high-end" consoles. The XSS is a low-end/entry-level console. The key word is "console". Trying to compare them to PC gaming hardware is ridiculous.

To me, there is PC gaming, and there is console gaming. If you want the most powerful console you can buy, right now, that is the PS5/XSX. That's a high-end console. Because it's possible to get a PC with specs lower than what a console can give you.

Nothing is really wrong with FSR2. I mean, it's not as good as AI-based reconstruction, but it's good enough. The problem is, as you said, the ridiculously low resolutions devs are using as a base for it.

If I would ever pray for anything, it's that platform holders set certain standards, or minimum performance presets, for a game to even get certified. E.g. they should make the minimum base reconstruction resolution 1440p for 30fps modes and 1080p for 60fps modes.

I agree with that 100% but Sony and MS have shown they don't like doing this, even though it would be great for gamers, which is very unfortunate. Can't see them changing now.
 
From what I understand, GDDR6 is more power hungry than GDDR5, and I'm not sure if the bandwidth of the Pro is more than sufficient given the power increase.

The PS5 does use GDDR6. But yes, power consumption was likely a consideration in why they chose 18 Gbps for the Pro over higher speeds.
 
People really think the PS5 Pro is going to have a 4080 equivalent??? Jesus this thread is nuts 🤣🤣🤣

It wIlL bE TwO YEarS OlD... Yes, it's also incredible hardware that even in a year's time will still be top tier.

I'm seeing Zen 4/5 thrown around a lot too... It's not happening, guys, calm down.

A GPU on the level of a 7800 XT is actually very competitive, especially for a console. Zen 2 can work with some tweaks, surely 😵‍💫

It's a mid-gen refresh, not a PS6, so it doesn't need to be a killer. As for those saying it's pointless and not needed: to you maybe, but to others it is definitely needed. Choice is good; if you don't like it, don't buy it.

Nothing worse than the "it's not needed" brigade. Speak for yourselves, some people like performance.
Rasterization is very different from ray tracing. No one expects the Pro to come anywhere near the 4080 in RT, only raster.
 
People really think the PS5 Pro is going to have a 4080 equivalent??? Jesus this thread is nuts 🤣🤣🤣

It wIlL bE TwO YEarS OlD... Yes, it's also incredible hardware that even in a year's time will still be top tier.

I'm seeing Zen 4/5 thrown around a lot too... It's not happening, guys, calm down.

A GPU on the level of a 7800 XT is actually very competitive, especially for a console. Zen 2 can work with some tweaks, surely 😵‍💫

It's a mid-gen refresh, not a PS6, so it doesn't need to be a killer. As for those saying it's pointless and not needed: to you maybe, but to others it is definitely needed. Choice is good; if you don't like it, don't buy it.

Nothing worse than the "it's not needed" brigade. Speak for yourselves, some people like performance.
4080 rasterization levels in highly optimized games like exclusives isn't that outlandish; obviously nowhere near that in RT.
 

FireFly

Member
Why are you still doing this? Do you just like to argue?

You do realize the PS5 GPU only has 4MB of L2 cache right? The XSX has 5MB. Like this whole thing is just crazy to me. You are talking about all these other things out there and ignoring the PS5 that is on the market RIGHT NOW. Like your stupid example you just gave of the RX6400 which has more L3 cache than the 10TF PS5 has TOTAL cache. smh...
The RX 6400 only has 128.0 GB/s, so the extra cache makes sense there. The low end AMD cards target 1080p or below, where the cache hit rates are higher, so even a small amount of L3 cache is of benefit. On the other hand consoles target higher resolutions and die space is at a premium, so it makes more sense to go for a bigger memory bus, if possible. However, neither Nvidia nor AMD have gone with a 384-bit bus on their midrange parts, and based on what Microsoft has said, it seems that it is tough to achieve in a ~300 mm^2 chip. So I don't think there is an easy solution to the bandwidth issue, either way.
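To put the RX 6400's situation in context, here's a rough bandwidth-per-TFLOP comparison (the 10.28 TFLOPs PS5 figure is the commonly cited spec, not something from this thread's leaks):

```python
# Rough bandwidth-per-TFLOP comparison (illustrative only).
# The PS5's 10.28 TFLOPs is the commonly cited spec, not a thread leak.

gpus = {
    "RX 6400 (64-bit bus, 16 Gbps)": (128.0, 3.565),   # (GB/s, TFLOPs)
    "PS5 (256-bit bus, 14 Gbps)":    (448.0, 10.28),
}

for name, (bandwidth, tflops) in gpus.items():
    print(f"{name}: {bandwidth / tflops:.1f} GB/s per TFLOP")

# RX 6400: ~35.9 GB/s per TFLOP -> leans heavily on its 16MB of L3
# PS5:     ~43.6 GB/s per TFLOP -> but it also targets higher resolutions
```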
 

Mr.Phoenix

Member
Dude, chill out. Just because someone doesn't agree with you, doesn't mean they are out for you or something.
If you don't want to talk tech, then why did you bother quoting me in the first place? Yes, because it was you who quoted me first, twice.
Very chill. I love talking tech, just not a fan of mindless drawn-out arguments, which is what this has become. I can't say what I have to say any other way. So it's best we just agree to disagree and move on.
Yes the PS5 GPU has 4MB of L2, but it also has 448GB/s. And even with that, it requires things like Anisotropic filtering to be at low values, because the console is limited by memory bandwidth. Something that is almost free on any PC GPU in the last decade.
But if we get a PS5 Pro with only 18Gbps, it will not be enough. And the 8MB of L2 will do very little for hit rates, so the GPU will still have to go plenty of times to memory.
You even ignore the official graph from AMD that I posted about cache size vs hit rates.
Ok, gonna try this for the last time.

Yes, the PS5 has 4MB and 448GB/s. The PS5 Pro can have anywhere between 512GB/s (if using 16 Gbps chips) and 576GB/s (18 Gbps chips). The "leaks" claim 18 Gbps, so let's go with 576GB/s. Now, to maintain a similar efficiency to the base PS5, a 56CU PS5 Pro only needs about 2.2MB more L2 cache, and a 60CU PS5 Pro about 2.7MB more. How you calculate this: divide 4MB by 36CU and you get roughly 0.11MB of L2 cache per CU on a fully loaded PS5. So 56 CUs need ~6.2MB and 60 CUs ~6.7MB.
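If you want to check my numbers, this is literally all the arithmetic I'm doing (assuming the Pro keeps a 256-bit bus like the base PS5):

```python
# Back-of-envelope for the numbers above. Assumes the Pro keeps a
# 256-bit bus like the base PS5 (that part is my assumption).

def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    """Raw memory bandwidth in GB/s for a given bus width and pin speed."""
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(256, 16))  # 512.0 GB/s with 16 Gbps chips
print(bandwidth_gb_s(256, 18))  # 576.0 GB/s with 18 Gbps chips

# Keep the base PS5's L2-per-CU ratio and scale it to a bigger CU count.
l2_per_cu = 4.0 / 36            # ~0.11 MB of GPU L2 per CU on the PS5
for cus in (56, 60):
    print(f"{cus} CUs -> ~{cus * l2_per_cu:.1f} MB of L2")
# 56 CUs -> ~6.2 MB, 60 CUs -> ~6.7 MB
```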

So even going as far as saying the PS5 Pro GPU L2 cache would increase from 4MB to 8MB, considering what Sony is already doing with the PS5, is actually a stretch. But it's logical.

And I ignored the graph you posted because that has nothing to do with a PS5... as there is a PS5 with actual real-world data to make our own assessments. Case in point, we can all see exactly what the PS5 is capable of doing with 448GB/s and 4MB of GPU L2 cache.
And stop pretending that a console has some magic that makes it different from what a PC GPU has in memory bandwidth.
There is a reason why consoles have to reduce graphical options that are heavy on memory bandwidth.
But that's exactly it. A console does have some magic that makes it different from PCs. It's called optimization or... console-optimized settings. Which brings me (again) to the other point. You NEED TO UNDERSTAND what the PS5 Pro is doing and where it's starting from.

The PS5 Pro is not taking games that are struggling to hit 1080p at 30fps and trying to make them do 4K@60fps. No. It's taking games running at 1440p at an average of 40fps and taking those games to an average of 60fps. That is ALL it's doing. Coupled with better RT and better reconstruction. But it's using the exact same texture and/or geometry presets the PS5 uses in its quality mode.

Now ask yourself a simple question: how much more tech over the PS5 do you think is needed to do that, while making a console that would retail for under $600?
 
Last edited:
The fact that AMD is utterly incompetent and generations behind Nvidia doesn't mean it won't be a disappointment though. That statement doesn't inherently imply or suggest he believes it will be around a 4080 (it absolutely will not); it means it should be, and is technically possible.

Again, the 4080 will be over 2 years old and is actually an XX70-series die. It is objectively true that it will be a disappointment if a 2-year-newer and presumably premium-priced console cannot come close to a 2-year-old 70-series GPU. That is like the minimum people should be expecting and demanding, and is historically what modern consoles have achieved (rough parity with the prior gen's 70-series cards).
I think people assume you mean the 4080 in its entirety, which includes RT, and not just the rasterization capabilities.
 

winjer

Gold Member
Very chill. I love talking tech, just not a fan of mindless drawn-out arguments, which is what this has become. I can't say what I have to say any other way. So it's best we just agree to disagree and move on.

Ok, gonna try this for the last time.

Yes, the PS5 has 4MB and 448GB/s. The PS5 Pro can have anywhere between 512GB/s (if using 16 Gbps chips) and 576GB/s (18 Gbps chips). The "leaks" claim 18 Gbps, so let's go with 576GB/s. Now, to maintain a similar efficiency to the base PS5, a 56CU PS5 Pro only needs about 2.2MB more L2 cache, and a 60CU PS5 Pro about 2.7MB more. How you calculate this: divide 4MB by 36CU and you get roughly 0.11MB of L2 cache per CU on a fully loaded PS5. So 56 CUs need ~6.2MB and 60 CUs ~6.7MB.

So even going as far as saying the PS5 Pro GPU L2 cache would increase from 4MB to 8MB, considering what Sony is already doing with the PS5, is actually a stretch. But it's logical.

And I ignored the graph you posted because that has nothing to do with a PS5... as there is a PS5 with actual real-world data to make our own assessments. Case in point, we can all see exactly what the PS5 is capable of doing with 448GB/s and 4MB of GPU L2 cache.

That is not how any of this works. You can't just make up data like that. Especially when companies like AMD and NVidia have published data for GPU cache hit rates.

But that's exactly it. A console does have some magic that makes it different from PCs. It's called optimization or... console-optimized settings. Which brings me (again) to the other point. You NEED TO UNDERSTAND what the PS5 Pro is doing and where it's starting from.

The PS5 Pro is not taking games that are struggling to hit 1080p at 30fps and trying to make them do 4K@60fps. No. It's taking games running at 1440p at an average of 40fps and taking those games to an average of 60fps. That is ALL it's doing. Coupled with better RT and better reconstruction. But it's using the exact same texture and/or geometry presets the PS5 uses in its quality mode.

Now ask yourself a simple question: how much more tech over the PS5 do you think is needed to do that, while making a console that would retail for under $600?

Optimization in a console for memory bandwidth bottlenecks means reducing graphics, like having low AF, low resolution alpha effects, etc.
 
And I'm sure AMD could do that if the TDP of the RTX 4080 was 215W like the 2070S...but it's rated at 320W, almost 50% higher. There's a reason new architectures take around 24 months to develop and not 12 months like at the beginning of the 8th generation of consoles. Look at how out of whack the performance scaling of the 4090 vs 4080 is compared to their SM count. Now do the same with the 4080 and 4070.

If Sony says "fuck that" and doesn't mind a huge, power hungry monster, then RTX 4080/7900 XTX in rasterization might be feasible (at a large premium, and I actually didn't check if they ever had an APU like that), but I somehow doubt that the Pro will be pulling 300W.

Never mind that Turing sacrificed die space for tensor and RT cores.
Now this is a more reasonable response than just name-calling, and it's appreciated. I agree here. I don't expect the Pro to have the raw hardware grunt of a 4080; I just think it's possible that in very specific games that are highly specialized for the console, like PS exclusives, it may perform very similarly to the card, aka punch above its weight.
 

Mr.Phoenix

Member
That is not how any of this works. You can't just make up data like that. Especially when companies like AMD and NVidia have published data for GPU cache hit rates.



Optimization in a console for memory bandwidth bottlenecks means reducing graphics, like having low AF, low resolution alpha effects, etc.
Make up data?

Ok.. I give up.

Again, let's just agree to disagree. Nice chat...
 

Gaiff

SBI’s Resident Gaslighter
The higher end 40 series are all comically overvolted in an effort to justify the price hikes and product stack shift. This isn't stupid or surprising, as 99.9% of people buying $1300+ desktop GPUs obviously couldn't give less of a fuck about power consumption/efficiency. Undervolting to sane levels sees the 4080 deliver 95% of the performance at around 230W or so on the heaviest gaming loads.
That's highly contingent upon the silicon lottery. I had a 4090 that crashed with a 90% power limit. Not every card reacts the same to undervolting, and you cannot have this degree of uncertainty with consoles. Additionally, this goes for Lovelace, which is far more power efficient than RDNA3. 320W is the 4080. The 7900 XTX, which is close, is 355W, and the undervolting/power limiting isn't as good as with Lovelace.

But even a 4080 in rasterization is a lot. I would bet on something closer to a 7800XT or 4070.
For RT, it's a whole different matter.
It's like people don't realize the 4080 is still 50% faster than the 3080 at 4K. Anyone expecting the PS5 Pro to outperform a 3080/6800 XT in anything by over 40% is off their rocker.
 
It's like people don't realize the 4080 is still 50% faster than the 3080 at 4K. Anyone expecting the PS5 Pro to outperform a 3080/6800 XT in anything by over 40% is off their rocker.

Not sure who keeps pulling out the 4080 comparisons; I'm guessing it's that dude who doesn't know how to quote, but I have him muted.

Anyways, yes, I think anyone who thinks the Pro will come close to 4080 rasterisation performance is nuts. The 4080 is a monster, but a ridiculously overpriced monster. I always said that had Nvidia retailed the card at $999, it would have killed RDNA 3 on arrival.
 
Please take the time to read what is being said and not just read to argue.

The more you talk the more I wonder if you even understand exactly what the cache's role is and what RAM is for.

But hey, I said let's agree to disagree.

RDNA3 was released this year. Not two years ago. THIS YEAR.

smh...

I've been telling that to my wife for years, Mate. It just doesn't work with some people!
 

Mr.Phoenix

Member
It's like people don't realize the 4080 is still 50% faster than the 3080 at 4K. Anyone expecting the PS5 Pro to outperform a 3080/6800 XT in anything by over 40% is off their rocker.
I find it ridiculous that people are expecting it to outperform the GPUs I am expecting it to perform like. Sorry, not even expecting... hoping. Hoping that VOPD allows it to have raster performance on par with a 6800XT and RDNA4 3rd gen RT allows it to have RT performance on par with a 3080.

Hoping so much that I am even afraid to say it out loud... then boom, some people here talking about 4080...
 
Just on a side note, I had a discussion with someone who specialises in graphics, hardware and programming on a professional level (I don't want to go into more detail than that). I asked him about the hardware acceleration for AI on RDNA 3, as I know there have been several debates regarding this subject on GAF; some have argued it's a fully dedicated AI core like the Tensor cores in Nvidia cards, others have argued it's repurposed shader units designed to run ML code.

Anyways this was his response :

"It has ML Acceleration HW, but it is directly integrated into the stream processors, so it can't run at the same time as regular FMA code. This differs compared to Nvidia Tensor Cores, which can run independent but require the same scheduler as the other pipelines, so it can't issue regular FMA instructions at the same time. This also differs from a Neural Processing Unit, which is near fully independent, and only the main CPU thread needs to pass an independent thread to it."

Make of that what you will, but it should give us an interesting insight into some of the potential AI capabilities of the Pro in regards to potential upsampling technology.
 
Last edited:

Mr.Phoenix

Member
Just on a side note, I had a discussion with someone who specialises in graphics, hardware and programming on a professional level (I don't want to go into more detail than that). I asked him about the hardware acceleration for AI on RDNA 3, as I know there have been several debates regarding this subject on GAF; some have argued it's a fully dedicated AI core like the Tensor cores in Nvidia cards, others have argued it's repurposed shader units designed to run ML code.

Anyways this was his response :



Make of that what you will, but it should give us an interesting insight into some of the potential AI capabilities of the Pro in regards to potential upsampling technology.
That could explain why AMD still doesn't use AI for FSR3 or have an AI-based version of it. We can only hope that with RDNA4 or the PS5 Pro it's allowed a little more autonomy.

However, I doubt it, seeing as they can do just fine with FSR3 without AI. If anything, we may have a higher chance of the AI units getting hacked out of the PS5 Pro CUs entirely.
 

ChiefDada

Member
I think people assume you mean the 4080 in its entirety, which includes RT, and not just the rasterization capabilities.

I do believe people are downplaying optimization and the RDNA 3.5 architectural uplift for the PS5 Pro, but I have seen no evidence, even from rumored specs, to suggest matching 4080 raster in any hypothetical circumstance. Maybe try to explain the basis of your reasoning so we understand your prediction better.
 

SABRE220

Member
That seems more likely. 3080/4070 levels in very few titles, and even so, this is being optimistic as hell.
Okay, let's not go to the other extreme. Matching a 3080 isn't some lofty, crazy expectation and is hardly being optimistic as hell; it would be the bare minimum performance delta required to justify the Pro console. That being said, coming even close to matching a 4080 in raster, even with dedicated optimization, is simply not a believable scenario.
 

Fafalada

Fafracer forever
Make of that what you will, but it should give us an interesting insight into some of the potential AI capabilities of the Pro in regards to potential upsampling technology.
Mobile RDNA3 already has separate AI accelerators that are pretty much GPU-independent. Whether they have sufficient throughput to be usable for realtime workloads is another matter, but anyway, those are off-the-shelf chips.


That could explain why AMD still doesn't use AI for FSR3 or have an AI-based version of it. We can only hope that with RDNA4 or the PS5 Pro it's allowed a little more autonomy.
I mean - frame synthesis literally has decades of research in it, it's one area where AI is more than a little superfluous - NVidia did it as a side-effect of their push, not because of any actual necessity.
It's very different to resolution reconstruction where almost all relevant progress happened in the past 10-15 years, and almost half of that is AI influenced.

Anyway - if the PS5 Pro is Cerny-led like its predecessor was, they will have very custom flavours of things in there (whether it's AI, RT or just rasterisation pipeline extensions). To date there are still things the PS4 Pro introduced that stayed exclusive to PS platforms.
 

winjer

Gold Member
That could explain why AMD still doesn't use AI for FSR3 or have an AI-based version of it. We can only hope that with RDNA4 or the PS5 Pro it's allowed a little more autonomy.

However, I doubt it, seeing as they can do just fine with FSR3 without AI. If anything, we may have a higher chance of the AI units getting hacked out of the PS5 Pro CUs entirely.

Intel's XeSS 1.2 running on DP4a is a great argument for AI, even on RDNA2.
It does run ~10% slower than FSR2, but the image quality is so much better that I think it's worth it.
And this is Intel's code, so it's not optimized for AMD GPUs. If AMD had something similar, they could optimize the code for their own GPUs. In RDNA3, it could work particularly well with support for WMMA instructions.
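For anyone wondering what the DP4a path actually is: it's just a packed 8-bit dot product with a 32-bit accumulator, i.e. four multiply-accumulates per instruction, which is why it's fast enough for ML inference even without dedicated matrix units. A rough emulation of one DP4a op (purely illustrative, not Intel's actual shader code):

```python
# Rough emulation of a single DP4a operation: dot product of four packed
# int8 values with a 32-bit accumulate. Purely illustrative; this is not
# Intel's actual XeSS shader code.

def dp4a(a: list[int], b: list[int], acc: int) -> int:
    """acc + a0*b0 + a1*b1 + a2*b2 + a3*b3, with int8-range inputs."""
    assert len(a) == len(b) == 4
    return acc + sum(x * y for x, y in zip(a, b))

# Four 8-bit weights times four 8-bit activations, accumulated in int32,
# in what would be a single GPU instruction on DP4a-capable hardware.
print(dp4a([1, -2, 3, 4], [5, 6, -7, 8], acc=10))  # prints 14
```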
 

Mr.Phoenix

Member
Intel's XeSS 1.2 running on DP4a is a great argument for AI, even on RDNA2.
It does run ~10% slower than FSR2, but the image quality is so much better that I think it's worth it.
And this is Intel's code, so it's not optimized for AMD GPUs. If AMD had something similar, they could optimize the code for their own GPUs. In RDNA3, it could work particularly well with support for WMMA instructions.
I think this whole reconstruction thing was a massive ball dropped by both Sony and MS. Especially Sony. With the PS4 Pro, Sony was way ahead with the idea that rendering at native resolution was just wasteful. And I applauded them for their initiative to build custom CBR-accelerating hardware into the PS4 Pro.

Then in 2018, when DLSS came to market, I thought that was the writing on the wall that the industry as a whole was going to take reconstruction tech seriously. Well, the industry did, but Sony, MS and AMD dropped the ball here.

This is 2023... five years removed from DLSS and seven years from the PS4 Pro's custom hardware (even though that wasn't that involved), and AMD still doesn't have a proper hardware-based AI reconstruction method, when reconstruction has become the most prevalent graphical feature in game design since we switched from sprites to polygons. That is just mind-boggling.

It's shockingly bad. Every decade or so, game design and/or hardware makes one massive leap forward. Sprites to polygons and shaders, to physically based rendering, to RT and reconstruction... etc. Right now, on the two single biggest rendering methods defining game design in this current phase, AMD is non-existent. That even Intel, making their first discrete GPU, did a better job in RT and reconstruction than AMD would have been enough for me to fire the whole R&D department at AMD.

And as for Sony and MS, they should have their own standardized reconstruction tech built into their SDKs. Like PlayStation Super Resolution (PSR) or something like that. And I believe that, in the same way we have Havok or Chaos physics middleware, there should be an RT lighting middleware that covers reflections, GI, AO, and shadows.

But AMD, though... those guys are just certifiably useless in the GPU space. Sometimes I almost wish that next gen Sony would switch to Intel or even use an Nvidia GPU... though dealing with Nvidia is like dealing with the devil.
 

Gaiff

SBI’s Resident Gaslighter
I think this whole reconstruction thing was a massive ball dropped by both Sony and MS. Especially Sony. With the PS4 Pro, Sony was way ahead with the idea that rendering at native resolution was just wasteful. And I applauded them for their initiative to build custom CBR-accelerating hardware into the PS4 Pro.

Then in 2018, when DLSS came to market, I thought that was the writing on the wall that the industry as a whole was going to take reconstruction tech seriously. Well, the industry did, but Sony, MS and AMD dropped the ball here.

This is 2023... five years removed from DLSS and seven years from the PS4 Pro's custom hardware (even though that wasn't that involved), and AMD still doesn't have a proper hardware-based AI reconstruction method, when reconstruction has become the most prevalent graphical feature in game design since we switched from sprites to polygons. That is just mind-boggling.

It's shockingly bad. Every decade or so, game design and/or hardware makes one massive leap forward. Sprites to polygons and shaders, to physically based rendering, to RT and reconstruction... etc. Right now, on the two single biggest rendering methods defining game design in this current phase, AMD is non-existent. That even Intel, making their first discrete GPU, did a better job in RT and reconstruction than AMD would have been enough for me to fire the whole R&D department at AMD.

And as for Sony and MS, they should have their own standardized reconstruction tech built into their SDKs. Like PlayStation Super Resolution (PSR) or something like that. And I believe that, in the same way we have Havok or Chaos physics middleware, there should be an RT lighting middleware that covers reflections, GI, AO, and shadows.

But AMD, though... those guys are just certifiably useless in the GPU space. Sometimes I almost wish that next gen Sony would switch to Intel or even use an Nvidia GPU... though dealing with Nvidia is like dealing with the devil.
In Sony's defense, I don't think they anticipated third-party games having such massive cuts in resolution. Sony's first-party titles generally don't have much of a problem with this, often running Performance Mode at 1440p and Quality Mode at 1800p-2160p. They could do without reconstruction and still look good.

It's when you see games such as Jedi Survivor drop to ~600p in Performance Mode that you start to scratch your head. Horizon Zero Dawn is 1920x2160 on the PS4 Pro. You'd think that with the much more advanced and powerful PS5, games could look substantially better at a significantly higher resolution (1440p+), but we're in a weird world where games do look better, but at a much, much lower native resolution.

I'm unsure if developers just aren't as good as they used to be (a premise I've seen posited by industry insiders) or whether it would have been just as bad with the previous generation had the CPUs been up to snuff and capable of 60fps and above.
 

winjer

Gold Member
I think this whole reconstruction thing was a massive ball dropped by both Sony and MS. Especially Sony. With the PS4 Pro, Sony was way ahead with the idea that rendering at native resolution was just wasteful. And I applauded them for their initiative to build custom CBR-accelerating hardware into the PS4 Pro.

Then in 2018, when DLSS came to market, I thought that was the writing on the wall that the industry as a whole was going to take reconstruction tech seriously. Well, the industry did, but Sony, MS and AMD dropped the ball here.

This is 2023... five years removed from DLSS and seven years from the PS4 Pro's custom hardware (even though that wasn't that involved), and AMD still doesn't have a proper hardware-based AI reconstruction method, when reconstruction has become the most prevalent graphical feature in game design since we switched from sprites to polygons. That is just mind-boggling.

It's shockingly bad. Every decade or so, game design and/or hardware makes one massive leap forward. Sprites to polygons and shaders, to physically based rendering, to RT and reconstruction... etc. Right now, on the two single biggest rendering methods defining game design in this current phase, AMD is non-existent. That even Intel, making their first discrete GPU, did a better job in RT and reconstruction than AMD would have been enough for me to fire the whole R&D department at AMD.

And as for Sony and MS, they should have their own standardized reconstruction tech built into their SDKs. Like PlayStation Super Resolution (PSR) or something like that. And I believe that, in the same way we have Havok or Chaos physics middleware, there should be an RT lighting middleware that covers reflections, GI, AO, and shadows.

But AMD, though... those guys are just certifiably useless in the GPU space. Sometimes I almost wish that next gen Sony would switch to Intel or even use an Nvidia GPU... though dealing with Nvidia is like dealing with the devil.

Just one thing to consider. The advantage that modern upscalers have today is that they are based on temporal information.
Spatial upscalers are very limited in comparison. Things like DLSS 1, FSR 1, Lanczos, NIS, etc., don't hold up.

And in this matter of temporal upscalers, the company that made the big push was Epic, with TAAU in UE 4.19, released in 2018.
It took until 2020 for Nvidia to catch up with DLSS 2.0, and even longer for AMD and Intel.
Today, Epic's TSR is a great solution for upscaling without using AI. It's already better than FSR 2.2.

Sony is strange in the sense that they have some studios that already have a decent temporal upscaler. For example, Insomniac's IGTI.
They could have shared the tech with more studios and implemented it in more Sony games.

MS had some tech demos of AI upscaling a few years ago, before the release of the Series S/X. But it seems they never did anything with it. A shame, really.

I think AMD had good reason not to have dedicated hardware for RT and AI in RDNA2, as it was developed mostly for consoles. And in consoles, die space is at a premium.
Unlike a dedicated PC GPU, a console SoC has to have not only the GPU, but also the CPU, IO controllers, memory controllers, caches, etc.
On Ampere, the AI cores account for ~10% of the chip, and RT cores account for ~25%. This is great for performance, but it takes a lot of space.
Doing this on a PS5 chip would mean the console would have only 5-7 TFLOPs (rough math on that below). So it makes sense to have a hybrid solution for consoles.
What doesn't make sense is that RDNA3 is doing the same thing. RT is still being done in the TMUs and AI is still being done in the shaders.
Yes, there are new instructions for both cases that improve performance. But it's nowhere near as good in performance and efficiency as the dedicated units that Intel and Nvidia have.
From the rumors we have, RDNA4 will fix these things, but AMD is lagging a lot.
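Rough math on that 5-7 TFLOPs point, reusing the ~10% / ~25% area figures above and assuming shader throughput scales roughly with the die area left for CUs (a crude assumption, just to show where the range comes from):

```python
# Crude back-of-envelope: if a PS5-sized GPU gave up Ampere-like shares of
# die area for dedicated AI (~10%) and RT (~25%) units, and shader
# throughput scaled roughly with the remaining area. The percentages are
# the ones quoted above; the scaling assumption is mine, for illustration.

ps5_tflops = 10.28                  # base PS5 FP32 compute
ai_share, rt_share = 0.10, 0.25     # die-area shares quoted above

remaining_area = 1.0 - (ai_share + rt_share)
print(f"~{ps5_tflops * remaining_area:.1f} TFLOPs left for shaders")  # ~6.7

# Less optimistic scaling (dedicated units also need routing, cache, etc.)
# lands you nearer the bottom of that 5-7 TFLOPs range.
```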

Intel does have a huge problem with their drivers and shader efficiency.
Just consider that the A770 has 20 TFLOPs of compute, slightly lower than a 6800XT. But it performs closer to a 6600XT.
The A770 has a die size of 406 mm² on N6. Compared to a 6800XT that has a die size of 520 mm², on N7, but doesn't use the full chip.
So Intel has a good chunk of catching up to do with Battlemage.
 
Last edited: