
Nvidia is considering 16 generated frames for the future of DLSS

winjer

Member

NVIDIA says that they could generate up to 16 frames at a time with future DLSS MFG, and if that benefits gamers, they may push MFG to such a level. As per the Q&A, it seems like NVIDIA wants to settle for three generated frames at the moment, since that already multiplies the FPS several times.
The multi-frame generation technique uses the latest transformer model and adds 'fake' frames to the gameplay to increase the smoothness of motion. Thanks to careful scene analysis by an advanced AI model, the current DLSS 4 technology is much more capable than it was initially. NVIDIA has showcased the performance difference between native and DLSS 4 MFG scenarios, with fps gains of up to eight times.
If NVIDIA adds 16 fake frames in between the original ones, it might just be too much and may introduce weird AI artifacts, distortions, or anomalies to the visuals. Also, fake frames still won't be able to compete with the quality of original frames and won't offer similarly smooth performance. Three fake frames look quite sufficient for now, but of course, the future of DLSS and MFG depends on what NVIDIA thinks is better for gamers.
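For intuition, here's a toy sketch of what frame generation conceptually does: synthesize intermediate frames between two real ones. This is plain linear blending, nothing like NVIDIA's actual transformer model, and it's exactly the kind of naive interpolation that ghosts on moving objects:

```python
import numpy as np

def naive_frame_gen(frame_a: np.ndarray, frame_b: np.ndarray, n_generated: int) -> list:
    """Blend n_generated intermediate frames between two real frames.

    Toy stand-in for MFG: the real thing uses motion vectors and an AI
    model. Plain blending like this ghosts on anything that moves, which
    is the artifact problem the article worries about at 16x.
    """
    frames = []
    for i in range(1, n_generated + 1):
        t = i / (n_generated + 1)  # interpolation position in (0, 1)
        frames.append((1 - t) * frame_a + t * frame_b)
    return frames

# Two dummy 1080p frames; 3 generated frames = today's DLSS 4 "4x" mode.
a = np.zeros((1080, 1920, 3), dtype=np.float32)
b = np.ones((1080, 1920, 3), dtype=np.float32)
print(len(naive_frame_gen(a, b, 3)))  # 3
```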

[walking glancing GIF]
 

Neilg

Member
Stupid article. The actual quote is:
"In the end, it is all about balance. If we can generate 16 frames at a time, and if it ultimately benefits the gaming experience, we will do so, but for now, we have decided that generating a maximum of 3 frames is appropriate."


they can theoretically go up to 16, but don't need to, and have no plans to.
 

lestar

Member
The AI better be really smart and read your mind to anticipate your inputs, to avoid the massive input lag.
 

readonly

Neo Member
The guy literally says "If we can generate 16 frames at a time, and if it ultimately benefits the gaming experience, we will do so". So what is the problem?
 

Gaiff

SBI’s Resident Gaslighter
Q. With NVIDIA's new DLSS 4 core technology, MFG, it will now generate multiple frames instead of just one. This time, it can generate up to three frames. Is there a goal for how many frames can be generated by AI in the future?

A. This DLSS 4 MFG technology is aimed at 4K 240Hz. It is difficult to say for sure what the technical limitations are in terms of how many more frames can be generated, or what the future holds. In the end, it is all about balance. If we can generate 16 frames at a time, and if it ultimately benefits the gaming experience, we will do so, but for now, we have decided that generating a maximum of 3 frames is appropriate.


They’re not even considering it. Why the inflammatory thread title?
 

LiquidMetal14

hide your water-based mammals
I'm not going to be the one to criticize. I care about the implementation, not piling on with everyone else's reactions.

I'm always going to cheer advancements in hardware and software, and this is no different. Yes, is it annoying that we have to adapt and get used to less raw raster performance just because of the demands of real-time ray tracing? Of course. Otherwise, I don't see how this can't just be appreciated, as it's an optional thing anyway. And truthfully, since DLSS 2 we have been in a very good place overall. And FSR continues to make its own strides.
 

deeptech

Member
You'll all soon be used to it, and you'll beg for more generated frames :p

Latency this latency that
 

kevboard

Member
I mean, imagine you're running 120fps natively and add frame gen to that to max out the increasingly high refresh rates of modern monitors 🤷

I feel like that is a good idea, as it should increase motion clarity even with fake frames.
500Hz monitors are out already.
 

Buggy Loop

Member
They better find a way to bring that added latency to near zero otherwise

[unimpressed Michael Keaton GIF]

They kind of did, or are on their way to.

Reflex 2

User input can hijack the whole pipeline, warping the frame and filling in the gaps, so that input latency stays at a minimum even with multi-frame generation. It's only a matter of time before they combine both.

People don't understand how the pipeline works. Reflex 1 was already quite clever, but Reflex 2 is just insane.
 

cormack12

Gold Member
I still don't get who this is targeted at. The majority of PC gamers buying these Nvidia cards at these prices are likely to be performance-sensitive to this kind of stuff. And if they're not, wtf are they buying high-end Nvidia cards for? I feel this is being marketed like when you could unlock disabled AMD shader cores, or use ATIFlash to unlock the RAM on the early RX 480s and get a decent performance bump.

THE FINALS achieves 56ms of latency. With Reflex Low Latency, latency is more than halved to 27ms. And by enabling Reflex 2, Frame Warp cuts input lag by nearly an entire frametime, reducing latency by another 50% to 14ms.

All that effort to get a little better than something like Game Motion+ on Samsung TVs? I wouldn't advocate switching on any of these motion engines, even in low-latency mode, on TVs, and I really don't get this feature except as a placebo for those who want to see justification in higher numbers.
 

Dorago

Member
DLSS from 2019 didn't help, and this will be even worse. They aren't competent programmers, they can't become competent programmers, and no amount of "helping hand" will ever be able to fix cargo cult work.
 

salva

Member
I just want hard locked 60fps @ 4k with full RT, path tracing, max graphic settings in any game.
 

kevboard

Member
They better find a way to bring that added latency to near zero otherwise

[unimpressed Michael Keaton GIF]

additional generated frames do not add to the latency.
it doesn't matter if you generate 1 frame between 2 real frames or 16. the time between the real frames doesn't increase, which means the latency also doesn't increase above what a single generated frame already adds.

if you run at 60 real frames per second and turn on frame gen with 1 generated frame, your latency will be roughly that of the same game just running at 60fps (better than that, due to Nvidia Reflex).
now add 16 generated frames instead of 1, and your latency is still about that of the game running at 60 real frames per second.

being at 60 native fps vs 60fps base with frame gen is usually the difference between 50ms of lag and 66ms of lag. often lower than that in both cases with Reflex enabled tbh.
and that's an amount of input lag that is less than that of 99% of console games.
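That ballpark is easy to sanity-check with napkin math (my simplified model, not NVIDIA's numbers): holding back one real frame to interpolate toward costs about one base frametime, no matter how many frames get generated in between:

```python
def framegen_timing(base_fps: float, n_generated: int) -> tuple:
    """Displayed fps and rough added latency for n_generated inserted frames.

    Simplified model: one real frame is held back so there is a "next"
    frame to interpolate toward, costing ~one base frametime of latency
    regardless of how many frames are generated in between.
    """
    base_frametime_ms = 1000 / base_fps
    display_fps = base_fps * (n_generated + 1)
    return display_fps, base_frametime_ms

for n in (1, 3, 16):
    fps, lat = framegen_timing(60, n)
    print(f"{n:>2} generated: {fps:>4.0f} fps displayed, ~{lat:.1f} ms added latency")
# output: 120, 240, and 1020 fps displayed, all with ~16.7 ms added latency
```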
 

hyperbertha

Member
Just generate 100% of the frames. Input lag is a problem, but one that can be solved by letting the GPU generate the inputs as well. You can just start the game, sit back, and relax. It doesn't get more cinematic than that.
You only need to generate 24 fps for a proper cinematic experience. Should be possible with frame gen 3.0 and 1440p.
 

Knightime_X

Member
Let's be real. It's very early technology.
Sure, some people will dislike it now, but later on when it advances more, their tune will change.
Kind of like everyone saying "boycott Modern Warfare 2" but everyone playing (and loving it) anyway.
 

ReyBrujo

Member
The highest-refresh 4K monitors are 240Hz. To be maxing out the refresh rate you would be running at ~15fps internally 🤢
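Napkin math for that (counting the 16 as generated frames on top of each real one, which gives a slightly lower base than 15):

```python
refresh_hz = 240        # fastest 4K monitors today, per the post above
n_generated = 16        # hypothetical future MFG setting
base_fps = refresh_hz / (n_generated + 1)  # 1 real + 16 generated per cycle
print(f"~{base_fps:.1f} real fps to saturate {refresh_hz} Hz")  # ~14.1
```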

I can totally see game developers targeting 20fps and letting DLSS or a similar frame generation system join the dots.

Sure, some people will dislike it now, but later on when it advances more, their tune will change.

Exactly, same as the digital-only future and GaaS. It's just a matter of when, not if.
 

mèx

Member
Let's be real. It's very early technology.
Sure, some people will dislike it now, but later on when it advances more, their tune will change.
It was the same with DLSS upscaling.

With die shrinking getting more challenging and expensive as we go onwards, it's only natural that GPU manufacturers try to develop new solutions to bridge the performance gap.

While not perfect currently, DLSS FG is still pretty decent as long as you hit 60+ FPS.
As with every tool, it must be implemented by the devs and used by the players properly for it to work as close to optimal as possible.
 

Holammer

Member
120hz televisions are already the norm. Eventually it'll be 360-480 and beyond. That's when framegen will shine.
You're not going to render 480fps native on the latest and greatest AAA stuff.
 
120hz televisions are already the norm. Eventually it'll be 360-480 and beyond. That's when framegen will shine.
You're not going to render 480fps native on the latest and greatest AAA stuff.
yeah meng
if you get ~120fps input lag, but see 1000+ fps...



that'll be CRT-level motion clarity
maybe better

now we just need 2000Hz+ displays
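The clarity claim holds up on paper. A rough sketch of sample-and-hold blur, assuming you're eye-tracking a pan moving at one 4K screen width per second:

```python
def hold_blur_px(speed_px_per_s: float, displayed_fps: float) -> float:
    """Perceived smear on a sample-and-hold display.

    Each frame is held for 1/fps seconds while the eye keeps moving,
    smearing the image by speed * hold_time pixels. CRTs strobe instead
    of holding, which is why they look clear even at low refresh rates.
    """
    return speed_px_per_s / displayed_fps

for fps in (120, 480, 1000, 2000):
    print(f"{fps:>4} fps: {hold_blur_px(3840, fps):>4.1f} px of smear")
# 120 -> 32.0 px, 480 -> 8.0 px, 1000 -> 3.8 px, 2000 -> 1.9 px
```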
 

Lethal01

Member
I still don't get who this is targeted at. The majority of PC gamers buying these Nvidia cards at these prices are likely to be performance-sensitive to this kind of stuff. And if they're not, wtf are they buying high-end Nvidia cards for? I feel this is being marketed like when you could unlock disabled AMD shader cores, or use ATIFlash to unlock the RAM on the early RX 480s and get a decent performance bump.



All that effort to get a little better than something like Game Motion+ on Samsung TVs? I wouldn't advocate switching on any of these motion engines, even in low-latency mode, on TVs, and I really don't get this feature except as a placebo for those who want to see justification in higher numbers.

I want max raytracing at a base 60fps, frame gen is a nice bonus past that.
 

Lethal01

Member
But it doesn't feel smooth if there is latency. You really won't have the reactivity that's part of a smooth game.

I want it to look smooth. 60fps games feel smooth enough for me. I have played at very high fps, and I know the difference low latency makes; it doesn't matter to me, I value the game looking smooth.
 