
Digital Foundry - Upscaling Face-Off: PS5 Pro PSSR vs PC DLSS/FSR 3.1 in Ratchet and Clank Rift Apart

PaintTinJr

Member
PSSR, like DLSS and XeSS before it, is an example of a convolutional neural network. And by regression rate, are you referring to gradient descent? They all do it; it's how the AI model is trained. And I'm not even gonna go into inference, since that's simply how the model fills in the gaps to create the higher-res image... which, again, they all do. Some better than others, no doubt, but like DLSS, it will always get better.

Obviously, some people will or can do it better than others, but eventually they will all arrive at the same place as long as the hardware is there to run it. As I said, ultimately, AI reconstruction is a brute-force process done on the training end of things. At some point, the models improve to the point where everything is doing the exact same thing. How do you think Intel made such a good showing in their first attempt?
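
(Purely for illustration - none of this is DLSS/XeSS/PSSR code, just a minimal PyTorch-style sketch of the idea being described: a small convolutional network trained by gradient descent on low-res/native frame pairs, then used at inference time to "fill in the gaps". Every name and number here is made up.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUpscaler(nn.Module):
    """Toy 2x super-resolution CNN: conv features -> pixel shuffle."""
    def __init__(self, channels=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3 * 4, 3, padding=1),  # 4 = 2x2 upscale factor
        )
        self.shuffle = nn.PixelShuffle(2)

    def forward(self, low_res):
        return self.shuffle(self.features(low_res))

model = TinyUpscaler()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # plain gradient descent

# One training step: compare the reconstruction against a native-res "ground
# truth" frame and nudge the weights downhill on the error. Random tensors
# stand in for real low-res/native frame pairs here.
low_res = torch.rand(1, 3, 270, 480)
native = torch.rand(1, 3, 540, 960)
optimizer.zero_grad()
loss = F.mse_loss(model(low_res), native)
loss.backward()
optimizer.step()

# Inference: the trained model "fills in the gaps" to produce the higher-res image.
with torch.no_grad():
    upscaled = model(low_res)  # -> shape (1, 3, 540, 960)
```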

I don't need to throw out words to obfuscate or confuse people on the forum; I prefer to keep things as basic as possible so that everyone reading can understand. So don't assume that because I didn't use certain words, I have no idea what I am talking about.
No, the multi-regression modelling is what you decide to use as input to generate the outputs that infer changes in the ML AI reconstruction. It is what separates something that is ML AI from a plain reconstruction algorithm like Lanczos.

A neural net design strategy for handling false negatives and false positives massively impacts a solution's effectiveness, as this is where the solution is at risk of bias. Equally important are the choices of inputs at the node level - the designer's effective hypothesis and algorithm for solving the reconstruction problem - and at the data level feeding the multi-regression - the designer's observation of the source data and which aspects of it inform the nodes. And that's not even taking into account the constraints on a solution for it to be feasible in real time, or any other decisions to use non-ML-AI processing of the data within it. So to suggest they all converge and it is just a brute-force problem simply isn't true.
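
(To make the Lanczos contrast above concrete, here is a rough toy sketch - my own illustration, not how any shipping upscaler is actually built. Lanczos is a fixed resampling kernel applied to the colour buffer alone, while an ML reconstruction regresses over whatever inputs the designer chooses to feed it - colour, depth, motion vectors and so on - combined by learned weights. The depth/motion inputs below are assumed purely for illustration.)

```python
import torch
import torch.nn as nn
from PIL import Image

# Classical reconstruction: a fixed Lanczos kernel over the colour buffer.
# No inputs other than the pixels, no learned parameters, no bias to manage.
frame = Image.new("RGB", (480, 270))
lanczos_up = frame.resize((960, 540), resample=Image.Resampling.LANCZOS)

# ML reconstruction: the designer chooses which signals feed the regression.
# Here the (hypothetical) inputs are colour + depth + 2D motion vectors,
# stacked as channels; the weights that combine them are learned, and those
# design choices are where the differences (and the bias risk) live.
colour = torch.rand(1, 3, 270, 480)
depth = torch.rand(1, 1, 270, 480)
motion = torch.rand(1, 2, 270, 480)
inputs = torch.cat([colour, depth, motion], dim=1)  # 6 input channels

reconstruct = nn.Sequential(
    nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3 * 4, 3, padding=1),
    nn.PixelShuffle(2),        # 2x upscale of the colour output
)
ml_up = reconstruct(inputs)    # -> (1, 3, 540, 960), depends on learned weights
```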

Has weather modelling resulted in every forecasting facility around the world converging on a single prediction shared by all?

As the modelling matures the results may look very similar, but the means by which each solution achieves them can be vastly different, and they aren't the same, because the solutions don't permanently remain in lockstep.
 

HeWhoWalks

Gold Member
Do you even pay attention?

[PS5 Pro] Early experience report. A next-generation gaming experience that will impress even PC gamers, a console packed with technology aimed at hardcore gamers
PlayStation Senior Principal Product Manager, Toshi Aoki

"PSSR" is one of the main attractions. Please tell us about its development process and what makes it stand out.

Aoki: Our main goal was to create an upscaling system that would satisfy game developers.

 Although it was jointly developed with AMD, we spoke with Mark Cerny and the game team to decide on the algorithm and hardware design.

 Each upscaler on the market has its own preferences and characteristics, but we started by aiming for a level that would satisfy developers when playing at 60fps, knowing that even if this upscaler doesn't produce native 4K, it will still be able to deliver the expression they were trying to achieve. So we focused on that and developed it through a lot of trial and error with AMD and the game team.


Which is what PS hardware leaker Kepler has been saying for a long time now.

Thennnnnnnn, back to my original point, PSSR is doing a great job according to the critics, so what is the issue? “The hardware it’s utilizing” is about as far as you got. What about said hardware?
 

PaintTinJr

Member
Nvidia already tried having a game-specific upscaler, and it was a complete failure.
One of its main problems would be having to train an AI for every single game. That would make things very expensive.
What DLSS, XeSS and very likely PSSR are doing is using a temporal algorithm to do the upscaling and then having the ML part clean up artifacts and polish the image.

And don't forget that Nvidia is the world leader in AI solutions, by a gigantic margin.
As yet we have no confirmation that PSSR scales in the same way that XeSS/DLSS does.

As for the training for every game, that comes down to how the solution has been partitioned into nodes. You could design a game-agnostic solution like DLSS/XeSS and then refactor the nodes into smaller ones, with the core of the model keeping the biggest nodes, fed by progressively smaller nodes the closer they get to an input (or inputs) that you want the developer to be able to override for their game - if needed - so that only those smaller nodes need re-training by the developer, with the overridden values, to produce exterior nodes that replace the default ones.
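
(A speculative sketch of that partitioning idea - nothing here is confirmed about PSSR or any other upscaler, it's just one way the split could look in code: a large frozen game-agnostic core, with small exterior input nodes that a developer could re-train on their own game without touching the core. All module names are invented.)

```python
import torch
import torch.nn as nn

class ExteriorNode(nn.Module):
    """Small input-side module a developer could re-train per game."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))

class PartitionedUpscaler(nn.Module):
    def __init__(self):
        super().__init__()
        # Small exterior nodes closest to the developer-facing inputs...
        self.colour_node = ExteriorNode(3, 16)
        self.aux_node = ExteriorNode(3, 16)   # e.g. depth + 2D motion vectors
        # ...feeding a large game-agnostic core (the expensive part to train).
        self.core = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3 * 4, 3, padding=1),
            nn.PixelShuffle(2),
        )

    def forward(self, colour, aux):
        features = torch.cat([self.colour_node(colour), self.aux_node(aux)], dim=1)
        return self.core(features)

model = PartitionedUpscaler()

# Per-game fine-tune: freeze the core and re-train only the small exterior
# nodes on the developer's overridden data, so the bulk of the model never changes.
for p in model.core.parameters():
    p.requires_grad = False
tunable = list(model.colour_node.parameters()) + list(model.aux_node.parameters())
optimizer = torch.optim.Adam(tunable, lr=1e-4)
```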

But that is what is happening now with FSR, where devs can tweak values for things like the reactive mask, exposure, etc.
There is no way to have devs tweak the ML component, because that would mean tweaking weights. And that would mean retraining the model with every change.
The model can be partitioned to achieve what you want it to achieve, but that's the difficulty. More flexibility with minimal redundant re-training is the hard part of the AI design task in providing a superior solution. I'm not saying it is easy, but it is possible, and it fits with Sony's 78 years of specialising in AV signal processing.
 

Elios83

Member
For the time being DLSS surely has a stability advantage, and it looks like it could be producing sharper results. But when it comes to the latter it's hard to tell for sure, because the sharpness settings in Ratchet don't match and the AW2 pictures are sourced differently.
On the other hand, PSSR would appear to handle aliasing better, and has more convincing fur rendering along with, in some ways, vegetation.

pu0uWqg.gif


PSSRvs-DLSS-A.gif


PSSRvs-DLSS-5.gif


RXrGBye.gif

Here is a DLAA shot showing that the branches missing with DLSS are supposed to be there.

Ratchet-PS5-PRO-6.gif


Ratchet-Fur.gif


Ratchet-PS5-PRO-4.gif


While the above fur shots are not direct comparisons with DLSS, it's currently impossible on PC to get rid of the kind of dithered fur edges that PSSR is resolving.

Thanks for these comparisons, PSSR seems to have some advantages even compared to DLSS.
I didn't expect that. And things are bound to improve with time.
I really wonder how much input Sony gave to AMD, how much PSSR is anticipating FSR4, and how much it is its own thing.
 

DenchDeckard

Moderated wildly
I remember all the comments comparing PS5 and Xbox Series X and asking what level of zoom we would need to see a difference.

I never expected it to get this insane!
 

Mr.Phoenix

Member
No, the multi-regression modelling is what you decide to use as input to generate the outputs that infer changes in the ML AI reconstruction. It is what separates something that is ML AI from a plain reconstruction algorithm like Lanczos.

A neural net design strategy for handling false negatives and false positives massively impacts a solution's effectiveness, as this is where the solution is at risk of bias. Equally important are the choices of inputs at the node level - the designer's effective hypothesis and algorithm for solving the reconstruction problem - and at the data level feeding the multi-regression - the designer's observation of the source data and which aspects of it inform the nodes. And that's not even taking into account the constraints on a solution for it to be feasible in real time, or any other decisions to use non-ML-AI processing of the data within it. So to suggest they all converge and it is just a brute-force problem simply isn't true.

Has weather modelling resulted in every forecasting facility around the world converging on a single prediction shared by all?

As the modelling matures the results may look very similar, but the means by which each solution achieves them can be vastly different, and they aren't the same, because the solutions don't permanently remain in lockstep.
Ok. Thanks for this, I get what you are saying now. And I concede that my take on this was a little too generalized.
 

Loxus

Member
Thennnnnnnn, back to my original point, what is Sony’s critical error here? “The hardware it’s utilizing”. What about said hardware? Don’t post some link while you dodge the question.
A few are assuming Sony did the AI/ML stuff on their own, but that's not the case, as Sony collaborated - again, according to Sony themselves.

I said this multiple times, but if you were paying attention, you would have realised it.

I don't even know wtf you're talking about with that “Sony's critical error”. What error are you even talking about?
 

HeWhoWalks

Gold Member
A few are assuming Sony did the AI/ML stuff on their own, but that's not the case, as Sony collaborated - again, according to Sony themselves.
I’m only gonna tackle this since the rest is irrelevant. You asked why Sony would use what they are using when AMD has their own solution. I said because it was collaborative and it’s working. Where did that get lost in translation? Legitimate question. If I misread you, that’s on me.
 

Javi97

Member
For the time being DLSS surely has a stability advantage, and it looks like it could be producing sharper results. But when it comes to the latter it's hard to tell for sure, because the sharpness settings in Ratchet don't match and the AW2 pictures are sourced differently.
On the other hand, PSSR would appear to handle aliasing better, and has more convincing fur rendering along with, in some ways, vegetation.

pu0uWqg.gif


PSSRvs-DLSS-A.gif


PSSRvs-DLSS-5.gif


RXrGBye.gif

Here is a DLAA shot showing that the branches missing with DLSS are supposed to be there.

Ratchet-PS5-PRO-6.gif


Ratchet-Fur.gif


Ratchet-PS5-PRO-4.gif


While the above fur shots are not direct comparisons with DLSS, it's currently impossible on PC to get rid of the kind of dithered fur edges that PSSR is resolving.
People expect PSSR to be at the level of DLSS2 but it seems like it is closer to DLSS3 and even has small victories over it.
 

Javi97

Member
That's extremely impressive. I didn't think they would be able to get close to DLSS, but in some of the cases shown in those zoomed images I actually prefer the look of PSSR. In others I prefer DLSS, but what I take from this is that the PS5 Pro is a game changer for console image quality. The difference in image quality between PS5 and PS5 Pro is going to be a lot more dramatic than I initially expected.

It's a damn shame that the bloody thing is £700 without a disc drive. These comparisons would have pushed me to get one if the drive was included. I can't justify £800 for it though. I'd rather hang on to the PS5 until maybe next generation.
$600 would have been a fair price for the console, even without the disc drive.
 
$600 would have been a fair price for the console, even without the disc drive.
Yeah, I probably would have impulse bought it at that price. £800 including the disc drive is just a little much for me personally. It's probably great for those who have no interest in PC gaming and are already well invested in the ecosystem.
 

PaintTinJr

Member
For the time being DLSS surely has a stability advantage, and it looks like it could be producing sharper results. But when it comes to the latter it's hard to tell for sure, because the sharpness settings in Ratchet don't match and the AW2 pictures are sourced differently.
On the other hand, PSSR would appear to handle aliasing better, and has more convincing fur rendering along with, in some ways, vegetation.

pu0uWqg.gif


PSSRvs-DLSS-A.gif


PSSRvs-DLSS-5.gif


RXrGBye.gif

Here is a DLAA shot showing that the branches missing with DLSS are supposed to be there.

Ratchet-PS5-PRO-6.gif


Ratchet-Fur.gif




While the above fur shots are not direct comparisons with DLSS, it's currently impossible on PC to get rid of the kind of dithered fur edges that PSSR is resolving.
The difference in fur is like comparing real-time rendered fur in Blender to offline-rendered fur, which certainly backs up the interview comments about PSSR inferencing different objects with specific training.

The quality of the fur here via PSSR is insane, and would be like 3-10 minutes of RTX-accelerated rendering quality for a single frame in Blender. I reckon the ML AI saving is at least 10,000 times versus real-time native rendering on an RTX 4090.
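
(For what it's worth, the back-of-envelope arithmetic behind a figure like that, taking the 3-minutes-per-frame end of the offline estimate at face value - an assumption, not a measurement:)

```python
# Back-of-envelope only: offline render time vs a real-time frame budget.
offline_seconds_per_frame = 3 * 60    # ~3 minutes per frame in Blender (low end of the estimate)
realtime_seconds_per_frame = 1 / 60   # 60 fps frame budget
speedup = offline_seconds_per_frame / realtime_seconds_per_frame
print(speedup)                        # 10800.0 - roughly the "at least 10,000x" ballpark
```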
 