
DLSS 4 (the new Super Resolution and Ray Reconstruction models) will be compatible with ALL RTX GPUs

The best way to play on a 4090 and a 4K screen is DLDSR 1.78x with DLSS Performance mode. It is too good. Essentially, Performance mode renders at 1440p, gets upscaled to the DLDSR resolution and then back down to 4K. IQ is noticeably better than the standard 4K DLSS Perf/Balanced/Quality modes, and the frame rate should be around 70-80 fps.
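
For anyone sanity-checking the numbers, here's a minimal sketch of that resolution chain, assuming the "1.78x" DLDSR label corresponds to a 4/3 per-axis factor and DLSS Performance is 50% per axis (both are assumptions about the labels, not taken from NVIDIA docs):

```python
# Resolution chain for DLDSR 1.78x + DLSS Performance on a 3840x2160 display.
# Assumptions: "1.78x" ~= 4/3 scale per axis (5120x2880), Performance = 50% per axis.
NATIVE = (3840, 2160)

def dldsr_target(w, h, per_axis=4 / 3):
    """Resolution the driver presents to the game under DLDSR."""
    return round(w * per_axis), round(h * per_axis)

def dlss_render(w, h, per_axis=0.5):
    """Internal render resolution for a given DLSS per-axis scale."""
    return round(w * per_axis), round(h * per_axis)

target = dldsr_target(*NATIVE)   # (5120, 2880) presented to the game
render = dlss_render(*target)    # (2560, 1440) actually rendered
print(f"render {render} -> DLSS upscales to {target} -> DLDSR downsamples to {NATIVE}")
```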
 
Last edited:

Rossco EZ

Member
Looks like that smooth motion feature is coming to 40 series in the future too. Just needs to be tested more apparently

NVIDIA told us that support for the RTX40 series GPUs will be coming in a future update.

“NVIDIA Smooth Motion is a brand-new driver technology and requires time for validation and QA across multiple products. Support for GeForce RTX 40 Series GPUs will be coming in a future update.”
 
Last edited:

Kenpachii

Member
Looks like that smooth motion feature is coming to 40 series in the future too. Just needs to be tested more apparently

NVIDIA told us that support for the RTX40 series GPUs will be coming in a future update.

Man, this 4080 laptop keeps on giving; can't wait to use it in older games.
 

yamaci17

Member
Thank you, waiting for your video.
So, not to make it seem like the game runs flawlessly or anything, I ran the game during the day (the game usually gets higher frames at night). It is quite bad; I'd say ray tracing with this CPU would be considered unplayable by many. I'd play it because I have tolerance for anything above 30. I mean, it just stutters, with or without ray tracing. With ray tracing, though, the stutters are a bit worse.

Hogwarts and the open world sections seem fine with ray tracing enabled, but of course you still get huge stutters and slowdowns between areas. That 76 fps was an odd outlier/specific location (I didn't mean to, lol). Overall performance is 45-60 FPS for the castle and free roam, and 36-45 FPS for Hogsmeade and the like.

So no, I wouldn't recommend it if you plan to enable ray tracing. And the DualSense features sadly do not work.



it goes without saying, it performs somewhat better without recording
 

Aaron07088

Neo Member
So, not to make it seem like the game runs flawlessly or anything, I ran the game during the day (the game usually gets higher frames at night). It is quite bad; I'd say ray tracing with this CPU would be considered unplayable by many. I'd play it because I have tolerance for anything above 30. I mean, it just stutters, with or without ray tracing. With ray tracing, though, the stutters are a bit worse.

Hogwarts and the open world sections seem fine with ray tracing enabled, but of course you still get huge stutters and slowdowns between areas. That 76 fps was an odd outlier/specific location (I didn't mean to, lol). Overall performance is 45-60 FPS for the castle and free roam, and 36-45 FPS for Hogsmeade and the like.

So no, I wouldn't recommend it if you plan to enable ray tracing. And the DualSense features sadly do not work.



it goes without saying, it performs somewhat better without recording

Thanks for the video. I can play games at 30 fps, that's not a problem, but all these stutters and no DualSense support make me sad. This game doesn't deserve my money. Thank you.

Edit: What's the cost of recording? I think it's like 5-6%, right?
 
Last edited:

PaintTinJr

Member
Yeah… that’s CNN upscalers, since DLSS 2 released in 2020
So have you got a link to where DLSS2+ is doing it that way?

Pretty sure all these AI models, when asked, say otherwise, as that is stated as a difference. But even if they are all wrong and DLSS2+ CNNs are doing that, they'd be paying Cerny's patent licence fee for the privilege of using that method.
 
Last edited:

marjo

Member
I thought you guys were exaggerating a bit, but I just tried Alan Wake 2 and can confirm that, on my system at least, the new model at performance looks significantly more detailed than the legacy one at quality. This is at an output resolution of 1440p. I'm not sure if the bulk of the improvements are coming from the changes to super sampling or ray-reconstruction.
 

yamaci17

Member
Thanks for the video. I can play games at 30 fps, that's not a problem, but all these stutters and no DualSense support make me sad. This game doesn't deserve my money. Thank you.

Edit: What's the cost of recording? I think it's like 5-6%, right?
Yes, give or take. The game stutters mostly due to lack of VRAM if you ask me; their texture streamer is really horrible. I actually had to put textures on low and disable ray tracing to get rid of most of the stutters.
Otherwise, with any texture setting other than low, the game stutters no matter your settings. And at the low texture quality option, the texture streamer only gets 1.2 GB, which ends up with poor textures overall.
 
Last edited:

Boo Who?

Member
Indiana Jones and the Great Circle looks and runs much better on my 4070 and 14th gen i7. Turning Frame Gen on lowered the framerate though. :messenger_tears_of_joy:
 

Bojji

Member
Indiana Jones and the Great Circle looks and runs much better on my 4070 and 14th gen i7. Turning Frame Gen on lowered the framerate though. :messenger_tears_of_joy:

Goes out of VRAM, probably. I had the same problem in some games when I had a 4070 and a 4070 Ti.

Nvidia released a feature that increases "framerate", but it was unusable on many of their 4xxx GPUs because they had insufficient VRAM capacity...
 

Buggy Loop

Member
Got my 5090 today; okay, I'm a believer, MFG is insane.

 

Boo Who?

Member
Goes out of vram probably. I had the same problem in some games when I had 4070 and 4070ti.

Nvidia released feature that increases "framerate" but it was unusable on many of their 4xxx gpus because they had insufficient vram capacity...
That makes sense. It doesn't take much to run out of VRAM on that game.
 
Installed the new driver, but my custom resolutions are gone, and the DLSS override tab says "Unsupported" on the games I have.

[screenshot: NVIDIA app DLSS override tab showing games as "Unsupported"]

Figured this out last night and yeah, it's super lame. I have 11 DLSS games installed right now and only one (Deep Rock Galactic) doesn't show "Unsupported".

Is it even really an "override" if it needs special support or patches or whatever to work? It's really just another option at that point.
 

Mithos

Member
For override-supported games yes; for unsupported ones you've got to at minimum swap the file, and possibly use Inspector too if K isn't the new default (I've written how a few posts above).
This regedit lets you see what DLSS preset you're running if you want to check (works globally, don't forget to turn it off).
Wish you could toggle this ON/OFF while in-game instead of doing registry edits before and after starting/closing a game.
But will have to work for now to see if the override "takes".
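
For reference, the registry toggle usually circulated for this is the NGXCore "ShowDlssIndicator" value. Here's a minimal sketch (Python, Windows-only, run from an elevated prompt); the key path and value are assumptions based on what's commonly reported, so double-check them before relying on this:

```python
# Sketch: toggle the DLSS on-screen indicator via the registry (Windows, run as admin).
# ASSUMPTION: the commonly reported key/value names are correct:
#   HKLM\SOFTWARE\NVIDIA Corporation\Global\NGXCore, DWORD ShowDlssIndicator = 0x400.
# Verify against an official source; this is not taken from NVIDIA documentation.
import sys
import winreg

KEY_PATH = r"SOFTWARE\NVIDIA Corporation\Global\NGXCore"
VALUE_NAME = "ShowDlssIndicator"
ON, OFF = 0x400, 0x0

def set_indicator(enable: bool) -> None:
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, ON if enable else OFF)

if __name__ == "__main__":
    enable = len(sys.argv) > 1 and sys.argv[1].lower() == "on"
    set_indicator(enable)
    print("DLSS indicator", "enabled" if enable else "disabled",
          "- remember to turn it off after checking the preset.")
```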
 

PaintTinJr

Member
Holy shit you have no idea what you’re talking about

« for what PSSR is already doing more intelligently. PSSR starting with a full-sized native image with holes to lower the pixel count is the best algorithm. »

Yeah… that’s CNN upscalers, since DLSS 2 released in 2020

« Don’t listen to Nvidia buzzwords, but listen to Cerny… »
Saw you have posted some time ago since my last reply to you, and just wondered if you are intending to reply, so that the strongly worded, bolded part you put out there can be clarified on this specific discussion point.

Genuinely interested to know if Nvidia's DLSS is using a CNN technique that's the main part of Cerny's patent on how PSSR works differently.
 

Buggy Loop

Member
Saw you have posted some time ago since my last reply to you, and just wondered if you are intending to reply, so that the strongly worded, bolded part you put out there can be clarified on this specific discussion point.

Genuinely interested to know if Nvidia's DLSS is using a CNN technique that's the main part of Cerny's patent on how PSSR works differently.

I'm at work. I'll get back to you.
 
Does Hogwarts have a separate toggle for RR? Because this murders Turing and Ampere: a drop of over 30% in Cyberpunk at 4K DLSS P. If so, you have to make do without RR.

You were right on the money about disabling ray reconstruction on a 3090: back up to a locked 60 again. You take a hit on the reflections as they don't look as sharp, though they're still less noisy than before, so WB must have updated them, and I was able to bump up the DLSS so a lot of the aliasing from Performance mode was reduced. Balanced DLSS now looks really good, and Hogwarts Legacy looks much clearer compared to the previous version of DLSS they were using.
 
Last edited:

Buggy Loop

Member
Go read Cerny's patent and listen to his recent technical video, or search the marketing explanations of how the Bravia X1 chips or newer XR processors work.

Bravia XR's whole "neural" marketing is basically AI image detection: oh, a human, I recognize that; the viewer will look at the human in the center of the big screen, so sharpen that and ease off around it for better focus. Color grading, focus, highlights, etc.

TV source material has no motion vectors, and TVs can live with high latencies that would never work in a game. There's a reason this processing is typically disabled in game mode.

Video processing has to work from pixels because that's all it gets. Games get access to much more data from sampling graphics: in graphics we can take accurate point samples with more data attached to each sample, like HDR value, exposure, subpixel offset, depth, etc.

Video processing uses optical flow, and it works because it has access to many more past and future frames, since it isn't latency dependent; games don't have that. Games use geometry motion vectors and cannot afford to stack so many future/past frames the way video processing does.

So there really isn't much to take from a TV processing unit for game upscaling, and after watching Cerny's presentation I honestly don't get where you want to go with this. What you can take away from Bravia XR is super resolution... which we get to below.

PSSR starting with a full-sized native image with holes to lower the pixel count is the best algorithm.

That's the very definition of super resolution upscaling, as Cerny says and employs the same wording as the rest of the industry.

[image: super resolution illustration]


Then you fill in the gaps as Cerny says.

[image: gap-filling illustration]


You can do it the dumb way and fill in the holes by copying/averaging the values from the closest pixels (nearest neighbour upscaling), but that will look like shit. Then you can say, oh well, why not a bilinear/bicubic upscaling algorithm, but that still won't look good enough and smears the image. Algorithms mostly stop there, on the thesis that you cannot recover missing data by further processing.
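
To make the "dumb way" concrete, here's a minimal sketch (plain Python/NumPy, purely illustrative, not how any shipping upscaler is implemented) of nearest-neighbour versus bilinear filling when a small image is blown up 2x:

```python
# Illustrative 2x upscaling: nearest-neighbour copies the closest pixel,
# bilinear blends the four closest pixels. Neither invents missing detail.
import numpy as np

def upscale_nearest(img, s=2):
    h, w = img.shape
    ys = np.arange(h * s) // s          # each output row maps to a source row
    xs = np.arange(w * s) // s
    return img[ys][:, xs]

def upscale_bilinear(img, s=2):
    h, w = img.shape
    out = np.empty((h * s, w * s), dtype=float)
    for oy in range(h * s):
        for ox in range(w * s):
            y, x = oy / s, ox / s
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = y - y0, x - x0
            top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
            bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
            out[oy, ox] = top * (1 - fy) + bot * fy
    return out

lowres = np.array([[0.0, 1.0], [1.0, 0.0]])   # tiny 2x2 "render"
print(upscale_nearest(lowres))                 # blocky
print(upscale_bilinear(lowres).round(2))       # smooth but smeared
```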

That's where the neural network comes in. Cram millions of high-quality images through a convolutional neural network, breaking the input down into smaller and smaller sub-samples (feature extraction) until you reach the fully connected layers, and then you'll be able to fill in the missing output data with micro-detail drawn from what the network learned.
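
For the "feature extraction" step, here's a toy sketch of what a single convolution does (again just a generic illustration, nothing like the actual DLSS/PSSR networks): slide a small kernel over the image and produce a feature map.

```python
# Toy 3x3 convolution: one kernel sliding over an image patch, producing a
# feature map (here a hand-picked edge detector, stride 1, no padding).
import numpy as np

def conv2d(img, kernel):
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)
patch = np.zeros((6, 6))
patch[:, 3:] = 1.0                      # a vertical edge
print(conv2d(patch, edge_kernel))       # responds strongly along the edge
```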

PSSR, XeSS and DLSS all do the same thing. Slightly different parameters, but it's the same recipe. None of them truly invented the fundamental tech here, not even Nvidia; the prior papers on all this pretty much came out of universities. What Nvidia gets credit for is being the first to make it work for real-time game rendering. The training model is the secret recipe, not so much the super resolution part.

So that's the super resolution part

But games don't work like upscaling a .jpg, and none of the above solutions dumbly reconstruct every hole / every gap frame by frame. That's not efficient; that's slow.

The spatial-temporal part → game movement.

All of them use the TAA framework

All the solutions of the last decade are built on the same baseline: they use frames in a pipeline as frame N and N+1, sometimes N-1, and rarely more than 3 frames in flight. That covers checkerboard rendering, temporal upsampling, PSSR, TAA, ATAA, DLSS, XeSS and FSR.

[image: checkerboard frame N / frame N+1 example]


But that checkerboard example is a cute, neat way to resolve it that would need frame N and frame N+1 to be almost interlaced, and of course that's not what happens in games, which is why it got much more sophisticated: traditional spatial-temporal upsampling leverages heuristics to identify invalid samples from previous frames.
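
A minimal sketch of that frame-to-frame reuse with a heuristic history check (toy pseudo-TAA in Python over scalar values; real implementations run per pixel on the GPU with colour clamping and the like, this only shows the reproject-validate-blend idea):

```python
# Pseudo-TAA accumulation: reproject last frame's result with motion vectors,
# heuristically reject invalid history, then blend with the current 1spp frame.
import numpy as np

def taa_resolve(curr, history, motion, alpha=0.1, reject_threshold=0.25):
    h, w = curr.shape
    out = np.empty_like(curr)
    for y in range(h):
        for x in range(w):
            # Reproject: where was this pixel last frame?
            py = int(round(y - motion[y, x, 1]))
            px = int(round(x - motion[y, x, 0]))
            if 0 <= py < h and 0 <= px < w:
                hist = history[py, px]
                # Heuristic: if history disagrees too much (disocclusion, thin
                # geometry popping in/out), fall back to the current sample.
                valid = abs(hist - curr[y, x]) < reject_threshold
            else:
                hist, valid = curr[y, x], False
            out[y, x] = (1 - alpha) * hist + alpha * curr[y, x] if valid else curr[y, x]
    return out

frame_prev = np.random.rand(4, 4)        # previous accumulated frame
frame_curr = np.random.rand(4, 4)        # current 1spp frame
mvecs = np.zeros((4, 4, 2))              # static scene: zero motion
print(taa_resolve(frame_curr, frame_prev, mvecs))
```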

Now, what are they all trying to achieve here? A hack of supersampling, of course.

Supersampling is too expensive, so we got MSAA, which limited the pixels with multiple samples to the edges of geometry; basically anything not on the edge of a triangle was ignored, so no improvement to transparency or internal texture detail.

TAA converts the spatial averaging of supersampling into a temporal average.

The same is true for all modern upscalers. By using the TAA framework, frame to frame with motion vectors, you suddenly have a bunch of gaps filled naturally by the next frame, almost as neatly as checkerboard interlacing, but not all of them. Still, what this means is that a very large portion of the holes get filled frame-to-frame.

So why the fuck AI ?

Each frame in TAA renders at 1 sample per pixel, 1spp.
But TAA has history problems with fine geometry and complex lighting/denoising. They all do.

[image: failure-region mask from Nvidia's ATAA]


So the mask above is, in the PSSR patent's wording, what they refer to as "bigger holes" that the AI CNN takes over to correct. Much like super resolution, AI is better placed to fill in missing information than hand-tuned algorithms.

The above image is from Nvidia's ATAA. They implemented the mask to detect these failure points and then focus rendering on those areas, but it was still a heuristic solution. For it, Nvidia knocked on Microsoft's door to get conservative rasterization implemented circa 2016-17.

DLSS CNN started because of this project at Nvidia which was initially just to repair photos.



Here at the DLSS 2 presentation they refer to it @ 0:45



So they realized they could fix the above issues with the CNN model from that dude's .jpg repair tool.

DLSS uses the same masks as ATAA, along with conservative rasterization from DirectX (and later Vulkan). I don't think I can actually share the PDF of the DLSS programming guide, as it's confidential as of its 27 Jan 2025 update, but I'm sure you can find it with a quick Google search. In clause 3.6.4 of the DLSS programming guide they explain the continuation of the above solution well: if you are rendering objects with thin geometric features, they tend to pop in and out of view due to the low resolution of the input buffer (full of gaps, and multiple frames means in/out) and the rasterizer missing parts of that geometry. These are holes, much like the above image from ATAA.

So, those missing features, how do you reconstruct them? They are missing. They have no motion vectors and no color; it's truly a gap. Without any further help, DLSS will incorrectly associate the previous frame of that object with the motion vector of the background, so rather than the holes in the object being mended, they persist.

So to help DLSS reconstruct, you basically laser-focus the AI on the interesting regions, because again, they are not dumbly looking at every pixel and throwing AI at it; the TAA framework already did most of the grunt work. They use conservative raster (from the ATAA implementation) because it ensures that if a primitive touches a pixel even a slight amount, it gets drawn, with a motion vector. Then DLSS can reconstruct that hole much better than the previous ATAA and heuristic solutions. Same as Cerny's paper's explanation of the machine learning inference process filling a hole with a higher quality fill.


So CNN models do not look at every rendered pixel and reconstruct the neighbours for the full screen. I saw some of your posts from the past where you think PSSR is so smart because it uses AI only when it needs to... yeah, they all do that.


They use the AI CNN to fix the failures of native+TAA, like their previous attempt with ATAA. They saw that failures such as what gets masked from frame N to frame N+1 in TAA are better filled in by the photo-repair CNN model.

I also saw one of your claims from before - "the final image will actually have at least 1/4 of the output resolution rendered natively (so 1080p's worth of pixels in a 4K output) rather than 100% predicted from a lower mipmap like DLSS/XeSS"

The resolution part is already detailed previously in my post, but the mipmap part is false too: the DLSS CNN is NOT designed to enhance texture resolution; it's not supposed to turn low-resolution textures into high-resolution ones. The texture mip bias in the engine integration should be set so textures are sampled at the same resolution as native rendering. The DLSS CNN is trying to fix temporal failures like the image above. In the pipeline it sits after the input (geometry/shading at e.g. 1080p) → DL upsampling (to e.g. 4K) → post-processes like mip bias, tonemap, depth of field, motion blur, bloom, etc. This is detailed by Edward Liu, NVIDIA GTC 2020.
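
On the mip bias point, the usual integration guidance (as I understand it, so treat the exact formula as an assumption rather than a quote from the guide) is to bias texture LOD by the log2 ratio of render to output resolution, so textures are sampled as sharply as they would be at native output:

```python
# Commonly cited texture LOD bias for upscaled rendering:
#   bias = log2(render_width / output_width)   (negative = sharper sampling)
import math

def mip_bias(render_width: int, output_width: int) -> float:
    return math.log2(render_width / output_width)

print(mip_bias(1920, 3840))   # 1080p -> 4K: -1.0
print(mip_bias(2560, 3840))   # 1440p -> 4K: ~ -0.585
```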


To close
As of just a few weeks ago



"A CNN processes pixel information through local operations spatially around a small number of neighboring pixels and temporarily across multiple frames"

Which is pretty much TL: DR of what I detailed above.

And all that just went out the window with transformer upscaler which I have no fucking clue how they do it.

As per video, they bring "reason" to image detection, use self attention, longer range pattern across a much larger pixel window.

So that seems to mean that rather than focusing on a tile like CNN models do, it can look at a much broader view and do pattern recognition, maybe not even having to point fingers with masks, but that's something I'm not sure about; it's too new. Something to learn in the future.
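
For what it's worth, here's a bare-bones sketch of the self-attention operation itself (NumPy, single head, random weights), just to show how every "pixel token" gets to weigh every other one instead of only a local neighbourhood; how Nvidia actually structures the DLSS4 transformer isn't public, so this is only the generic mechanism:

```python
# Generic single-head self-attention over a set of "pixel tokens": every token
# attends to every other token, so long-range patterns can be used, unlike a
# convolution that only mixes a small local neighbourhood.
import numpy as np

def self_attention(tokens, wq, wk, wv):
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])           # all-pairs similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax per token
    return weights @ v                                # weighted mix of all tokens

rng = np.random.default_rng(0)
d = 8
tokens = rng.standard_normal((16, d))                 # 16 pixel tokens, 8 features each
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
print(self_attention(tokens, wq, wk, wv).shape)       # (16, 8)
```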

Will probably ask a graphics engine programmer I know, who was behind big AAA studio hits with their in-house engines and has worked a lot with Nvidia on implementations in the past, to explain to me like I'm five what the transformer model is doing, because I know he's already going through the SDK and implementation and talking it over with his other nerd colleagues.
 
Last edited:

Aaron07088

Neo Member
Yes, give or take. The game stutters mostly due to lack of VRAM if you ask me; their texture streamer is really horrible. I actually had to put textures on low and disable ray tracing to get rid of most of the stutters.
Otherwise, with any texture setting other than low, the game stutters no matter your settings. And at the low texture quality option, the texture streamer only gets 1.2 GB, which ends up with poor textures overall.
I saw benchmarks; VRAM is a problem, but more VRAM doesn't solve every problem. The game still stutters with ray tracing enabled if you have a mid-range CPU like a 12400F, 5600 or 5700X.
 

PaintTinJr

Member

I've started reading, and your first move is to talk about Sony's Bravia XR TVs, when this is all about whether Nvidia's CNNs work identically to the PSSR CNN described in Cerny's patent: taking a resolution, designating 1/2, 3/4, etc. of the pixel count as holes, then rendering at native resolution with those holes and using ML and non-ML techniques to fill them.

Just having a quick scan over the video thumbnails and images, DLSS2+ isn't doing that. So am I going to find the exact part - the one you were supposed to provide - buried in there where DLSS2+ is described doing what PSSR does, back in 2020?

I'm happy to go through it all, but at a glance I'm already feeling you've tried to equate two different techniques after downplaying PSSR's unique patented algorithm as already being in DLSS2+, and are now potentially using obfuscation to hide that your claim is inaccurate.


Will I find the exact part? If so, please just present it. Or is your big response pure obfuscation, as I suspect?


Edit: just finished reading and watching all your stuff, and I'm pleased you did make the effort and are making a real argument, even if your conclusion is wrong.

I think the part you aren't seeing is which unique feature the PSSR Cerny patent was granted for; a patent withstanding non-uniqueness challenges is a very robust thing.

Although you think the masking is done at native resolution in DLSS2+ and that the holes are positioned the way PSSR positions them, the material you have presented says otherwise: the holes are the result of taking a lower-resolution native render and scaling it to output resolution, and the CNN processing that upscales it to output-native quality is finding holes that come from undersampling subpixel details at the lower resolution and scaling up, where object holes appear because the aliased subpixel details get stretched.

Even in the final video, at around 6 minutes, the engineer talks about the trend of non-native pixels being rendered with DLSS as it has improved, with now only 1/16 of the output pixels rendered natively.

Your first video is also at odds with the job of upscaling in games. The job isn't to take no data and replace unwanted parts of an image with something more aesthetically pleasing, but to understand what has been rendered and render it as it should have been at output-native, or even superior to output-native (a shallow fake). So the origins of Nvidia's CNNs back in 2016 were more of an application for magazine picture enhancement, and that seems to remain in their pre-DLSS4 solution IMO.

As for the diagram for the DLSS4 transformer, it looks very similar to the diagram in the Cerny patent, and that's where PSSR is also not quite the CNN vs transformer split Nvidia want people to believe it is. PSSR's algorithm to position holes in its native output render resolution (1/4 of the pixels) is heuristically done AFAIK, which is very different from DLSS2+.
 
Last edited:
PaintTinJr said: (post above quoted in full)
Your comments always make me smile. Dude, you are my favourite PlayStation die-hard fan 😋. No one else can replace you! Only you can pretend that less detail is better than more detail (my comparison between TAA and DLSS in RDR2), or that a weaker console is faster than a much stronger one (PS2 vs. Xbox).
 

V1LÆM

Gold Member
I don't see any difference in visuals or performance in Cyberpunk.

Do I need to use the NVIDIA app or can I use something like DLSS Swapper to update the version? I did that with Forza but again I can't tell any difference.
 
Got my 5080 today and jumped right into CP2077.

DLSS4 is absolute magic and DLSS Performance will be my new standard setting going forward. I can't stress enough just what a gigantic jump in image quality this is in personal side-by-side comparisons on your own screen at home. With this I'll never look back to playing native again (especially native with TAA).

Framegen though still sucks just as much as before if you're not hitting high framerates already (imho min 70fps), and any and all visual issues are of course worse with x3 MFG.
 
Last edited:

geary

Member
But how different is DLSS4 Perf vs Balanced vs Quality now? Is it worth going for Balanced or Quality if Performance looks how it looks? Is the fps hit worth it?
 

PaintTinJr

Member
It makes shadows/SSAO or whatever flicker like crazy in Jedi Survivor for me.

[gif: flickering shadows/foliage in Jedi Survivor]
I suspect we are going to go through a painful period of games releasing with certain techniques that look fine at native high-res but are a poor fit for ML upscalers - before those techniques get abandoned.

Even just going by Nvidia's claim that 15 of 16 pixels aren't natively rendered with DLSS4, it would be impressive enough if upscalers could do a decent job on those bushes were they stationary, given such sparse data and high-frequency detail; but add in poor pseudo-motion and those items passing over each other, and DLSS4 is doing well to even look as noisy as 360/PS3-era foliage in that gif, IMO.
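
For context on where the "15 of 16" figure comes from (my reading of the marketing claim, so take the breakdown as an assumption): Performance upscaling renders a quarter of the output pixels per frame, and 4x multi frame generation renders only one of every four displayed frames.

```python
# Rough arithmetic behind "15 of 16 pixels aren't natively rendered" (assumed breakdown):
# Performance upscaling: 0.5 per axis -> 1/4 of output pixels per rendered frame.
# 4x multi frame generation: 1 rendered frame out of every 4 displayed.
upscale_fraction = 0.5 * 0.5           # pixels rendered per displayed frame
rendered_frame_fraction = 1 / 4        # frames actually rendered with MFG 4x
native_share = upscale_fraction * rendered_frame_fraction
print(f"natively rendered share: {native_share:.4f} (i.e. 1 in {round(1 / native_share)})")
```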
 
Got my 5080 today and jumped right into CP2077.

DLSS4 is absolute magic and DLSS Performance will be my new standard setting going forward. I can't stress enough just what a gigantic jump in image quality this is in personal side-by-side comparisons on your own screen at home. With this I'll never look back to playing native again (especially native with TAA).

Framegen though still sucks just as much as before if you're not hitting high framerates already (imho min 70fps), and any and all visual issues are of course worse with x3 MFG.
I too am really happy with DLSS4 performance. I've stopped using DLSS Quality/Balanced. I'm so much happier with getting higher frames with Perf and the IQ is still insane.

Gamechanger.

If anyone wants to force the latest preset follow this guide:


Furthermore, download DLSS swapper:

You can now play all your (DLSS compatible) games with the latest file.
 
But how different is DLSS4 Perf vs Balanced vs Quality now? Is it worth going for Balanced or Quality if Performance looks how it looks? Is the fps hit worth it?
For me the difference between Perf and Balanced hasn't been visible in CP2077; in Quality the picture is again a tad sharper/clearer, but the difference in motion is so minuscule to my eye that I'll take the performance instead.
I think it will be hard to show those differences via a YT video, too, as they will probably be invisible after YouTube compression hits. That's gonna be one of those 40x-zoom DF videos with highlights on certain geometric structures, or games with specific issues. By and large, DLSS4 is just absolutely amazing.
DLSS support just became a pretty big factor in all my future game purchasing decisions.
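
On the Perf/Balanced/Quality question, here's a small sketch of the internal render resolutions, assuming the commonly cited per-axis scale factors (Quality ~0.667, Balanced ~0.58, Performance 0.5, Ultra Performance ~0.333); the DLSS4 model change doesn't alter these input resolutions, only how well they're reconstructed.

```python
# Internal render resolution per DLSS mode, using commonly cited per-axis factors
# (assumed values, not taken from official documentation).
MODES = {"Quality": 0.667, "Balanced": 0.58, "Performance": 0.5, "Ultra Performance": 0.333}

def render_res(out_w, out_h, factor):
    return round(out_w * factor), round(out_h * factor)

for name, f in MODES.items():
    print(f"4K {name:>17}: {render_res(3840, 2160, f)}")
# e.g. Quality ~ (2561, 1441), Balanced ~ (2227, 1253), Performance (1920, 1080)
```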
 
Last edited:

Gaiff

SBI’s Resident Gaslighter
I don't see any difference in visuals or performance in Cyberpunk.

Do I need to use the NVIDIA app or can I use something like DLSS Swapper to update the version? I did that with Forza but again I can't tell any difference.
You can do that in the in-game menu in Cyberpunk. You can choose between Transformer and CNN in the graphics options.
 
But how different is DLSS4 Perf vs Balanced vs Quality now? Is it worth going for Balanced or Quality if Performance looks how it looks? Is the fps hit worth it?

For me the upscale seemed similar enough in quality, but in Hogwarts Legacy Performance introduced noticeable aliasing and bumping up the DLSS reduced it. Given how everyone here is waxing lyrical about the new Performance mode, it's probably game dependent at this early point; that, or playing on a 50" display isn't helping to mask these things :messenger_grinning_sweat:
 

bbeach123

Member
For me the upscale seemed similar enough in quality, but in Hogwarts Legacy Performance introduced noticeable aliasing and bumping up the DLSS reduced it. Given how everyone here is waxing lyrical about the new Performance mode, it's probably game dependent at this early point; that, or playing on a 50" display isn't helping to mask these things :messenger_grinning_sweat:
It 100% varies from game to game. DLSS 4 preset K gave me noticeable ghosting in FF7 Rebirth (and tree shimmering), so I swapped back to the 3.7 DLSS.

Right now it's not the straight upgrade everyone says it is; it's a trade-off, with a few glitches and problems in exchange for better image quality (sharpness and detail).

Hope Nvidia irons it out soon.
 

3liteDragon

Member
"DLSS Override - Models Preset" - "Latest"

Doing this in the NVIDIA app will cause the game to use the new transformer model, correct?
The "latest" option in the NVIDIA app for Super Resolution is preset K, which apparently is an improved version of preset J, both presets use the transformer model. Has anyone done a comparison between the new preset K & preset J?


I've heard some say preset J is sharper and looks better but preset K has more temporal stability and handles the disocclusion artifacting that preset J introduced. I'll do some testing tomorrow and update my comment.

Edit: Preset K is extremely close to J, I only tested it in the Witcher 3 but J is only slightly sharper. Both still suffer from minor artifacting but Preset K deals with vegetation specifically in this regard MUCH BETTER. I've decided to stick with Preset K for the Witcher 3 but I suggest testing it on a per game basis as this isn't a one size fits all.

I went back to Preset A on the old CNN model which was the recommended DLSS preset when the game launched, and it genuinely hurts my eyes, whenever anything is in motion it blurs significantly; I don't even know how I played the game like this for 200 hours.
Reading the thread, sounds like Preset K is better if you're playing a game that has lots of vegetation.
 
Last edited:
The "latest" option in the NVIDIA app for Super Resolution is preset K, which apparently is an improved version of preset J, both presets use the transformer model. Has anyone done a comparison between the new preset K & preset J?



Reading the thread, sounds like Preset K is better if you're playing a game that has lots of vegetation.

I tried presets K and J in both Wukong and Ninja Gaiden 2 Black.

The overall image just looks better with preset K, with less shimmering.
 

Rossco EZ

Member
I don’t have K through the Nvidia App.

Assuming I need to use inspector and DLSS swapper?
Shouldn't need Nvidia Inspector now, or at least I didn't in my case. I'm still using DLSS Swapper and adding in the latest DLSS just to be sure, but once you've done that, just go to the NVIDIA app and select "Latest" and it should use the latest preset. Tried it myself on a few games and preset K is the one that was being used.
 