Shmunter
Member
> Is this really the case? I tried the UE5 city demo at 4K native on Epic settings and my CPU utilization was paltry, like it's always been at high resolutions.

Just going on the DF vid by Alex Battlestarglia, no first-hand experience.
> Can ultra fast SSDs improve jiggle physics?

Only on PS5 with Tecmo Games.

> Only on PS5 with Tecmo Games.

That's art direction.

> Only on PS5 with Tecmo Games.

Why only PS5? I can just store the boobs in RAM.

> Why only PS5? I can just store the boobs in RAM.

Wait, Tecmo does play on PC as well! Xbox doesn't get to see the jiggles from Tecmo.

> Wait, Tecmo does play on PC as well! Xbox doesn't get to see the jiggles from Tecmo.

Can't you enhance the nipples with the power of the cloud?

> Can't you enhance the nipples with the power of the cloud?

Tecmo wants no part of Xbox these days.
> The render equation in almost anything 3D can be replaced with simple or complex pre-calculation and lookups IMO, like lightmaps, shadowmaps, and cubemaps. Maybe even BVH structures: if the foreground frustum cascade chunk can be reduced to a base BVH plus a smaller diff BVH for the positions/orientations of the dynamically moving parts, it can be pre-calculated for streaming too. And if the camera position and orientation can also be known in advance, then even the result of the BVH traversal could be pre-calculated and streamed in, as if it were dynamic.

You are still focusing on setup. I'm not talking about that. I'm talking about the light loop. There is nothing streamed in for iterating through lights and evaluating shader materials. That is, by far, the most expensive code in a render (realtime or offline).
> Didn't Insomniac clearly claim that R&C is only possible on PS5 hardware? Even on Twitter they replied to the direct question, adding that whoever said otherwise just had no clue how it really works. Forgive me, but I'm more inclined to believe them than some tech guys theorizing on the net.

You are completely missing the point. I was talking about PC gamers not needing to worry about having an SSD for any game because of slow level-load speeds. Loads will be fast enough that people won't need to complain. If the PS5 can load a level in 1 second and PC players get load times of 20 seconds, no PC gamer is going to care when, after that level is loaded, they are getting better FPS, better-quality graphics, and higher resolutions.
> You are still focusing on setup. I'm not talking about that. I'm talking about the light loop. There is nothing streamed in for iterating through lights and evaluating shader materials. That is, by far, the most expensive code in a render (realtime or offline).
>
> for (each light)
> {
>     if (ray tracing)
>     {
>         compute random ray direction from a hemisphere equation; // this cannot be streamed
>         for (each ray computed)
>         {
>             cast a ray into the scene and determine: any constants + diffuse color + (specular color * fresnel factor) * normalization term; // this cannot be streamed
>             if (global illumination)
>             {
>                 cast a ray from the previous ray hit; // this cannot be streamed
>                 compute random ray direction for occlusion and bounced light; // this cannot be streamed
>                 compute: ambient occlusion + RT GI bounce; // this cannot be streamed
>             }
>         }
>     }
> }
>
> This very simple algorithm (without any refraction, caustics, hair materials, procedural textures, etc.) has to be computed at a pixel/sub-pixel level after all the streaming is in memory.

I fully appreciate you writing the pseudocode out to illustrate the exact part of the algorithm you believe can't be pre-calculated, but even after looking through it and understanding your point, I would still argue that even the "random" parts can be faked/cheated with pre-calculated cyclical lookups, say using a Mandelbrot/Julia set image as the source of the random value, because even well-seeded generators in computer programs aren't random at all, deep down: computation is fully deterministic if you just unroll the algorithm.
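As a concrete illustration of the "compute random ray direction from a hemisphere equation" step in the quoted pseudocode, here is a minimal C++ sketch of cosine-weighted hemisphere sampling (a standard technique; the function names are mine, not either poster's). The direction depends on fresh per-sample random numbers drawn at shading time, which is why it is computed rather than streamed:

#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

constexpr float kPi = 3.14159265f;

// Map two uniform random numbers in [0,1) to a cosine-weighted
// direction on the hemisphere around +Z (the surface normal).
Vec3 sampleHemisphereCosine(float u1, float u2)
{
    const float r   = std::sqrt(u1);    // radius on the unit disk
    const float phi = 2.0f * kPi * u2;  // angle on the unit disk
    return { r * std::cos(phi),
             r * std::sin(phi),
             std::sqrt(1.0f - u1) };    // lift the disk point onto the hemisphere
}

int main()
{
    std::mt19937 rng(12345u);  // seeded, hence deterministic, PRNG
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);

    // One fresh direction per light, per ray, per pixel: this is the
    // per-sample work the light loop performs after streaming is done.
    Vec3 d = sampleHemisphereCosine(uni(rng), uni(rng));
    (void)d;
    return 0;
}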
Disagree. According to our local experts, we can just load everything into the PC's massive pool of memory.
> If there is enough memory, that would be faster than any SSD.
>
> Even an average DDR4-3200 kit has a memory bandwidth of 51 GB/s. This is more than any SSD on the market. And if we go to DDR5, these values increase a lot.
>
> Now, I'm just talking about system RAM. If we talk about VRAM, these values go much higher.
>
> But the greatest difference is in access times. SSDs have access times measured in the tens of microseconds at best; DRAM has access times in the nanoseconds. That is a gap of roughly three orders of magnitude.
>
> If you are arguing that an SSD is faster than RAM or VRAM, then you are very, very wrong.
>
> In fact, there are games where we can disable texture streaming and force everything into memory. With UE, all it takes is the command-line option -NOTEXTURESTREAMING.

Point of note, not necessarily related to your conversation: RAM/VRAM needs to be ultra fast because it is a computation target. To render a scene, the CPU/GPU perform millions of read/write operations on RAM per frame. Loading an asset into RAM from secondary storage is a one-off operation, detached from those computational requirements.
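For what it's worth, the 51 GB/s figure in the quote is the standard dual-channel arithmetic, assuming a DDR4-3200 kit with two 64-bit channels:

3200 MT/s × 8 bytes per transfer × 2 channels ≈ 51.2 GB/s

By comparison, a fast PCIe 4.0 NVMe drive peaks at roughly 7 GB/s sequential, so the bandwidth claim holds comfortably.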
> The amazing thing of having an SSD that fast is that you can swap between two fully realized worlds in an instant.
>
> Of course, you could do the trick with 32 GB of RAM and not touch the SSD. But since no dev will ever make a game assuming that, we come back to the standard 8/16 GB setup.
>
> Here, you could have two or three half-empty worlds for the trick, or you could have an SSD to swap between any fully featured worlds you want. That's why R&C only works on SSDs: they are not sacrificing world detail to do any trick.

Not to mention that storing assets in RAM that you may not see for a long time, or ever, is horrendously wasteful. And initial load times suffer too, copying mass data up front instead of loading on demand.
> I highly doubt some actually care for loading times over graphics/performance. Especially a fraction of a second in loading times, hence MY comment, and the comment I was replying to. I'd take a fraction of a second longer to have better graphics and performance, and I guarantee over 90% of GAF will agree; no fanboyism is needed to paint the obvious picture.

Play Gran Turismo 7 on PS5 for a week with those 2-second loading times, then try to go back to the 20/30-second loading times of the PS4 version, and tell me again that people don't care about loading times.
> I would be surprised if the 4.8 GB/s of I/O that DirectStorage is currently capable of, even without GPU decompression, couldn't run Ratchet & Clank on PC.

DirectStorage still relies on the weakest link of the chain.
> You are completely missing the point. I was talking about PC gamers not needing to worry about having an SSD for any game because of slow level-load speeds. […] In the case of R&C, they can easily put 2-3 levels in PC memory to allow instant teleporting back and forth. The game only teleports in one direction (i.e. 2 very small linear levels back and forth).

I haven't missed the point. You clearly said R&C was feasible on all platforms. Then you retreated.
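A minimal sketch of the arrangement the quote describes, assuming a hypothetical engine where two or three small levels fit in memory at once; the types and names here are illustrative, not from Insomniac or any real engine:

#include <array>
#include <cstddef>

// Hypothetical fully resident level: in a real engine this would own
// meshes, textures, audio, collision data, etc., all loaded up front.
struct Level { /* fully loaded assets */ };

class LevelFlipper {
public:
    // Done once, during the initial load screen. After this point no
    // disk I/O is required to move between the resident levels.
    void preloadAll() { /* read all levels from disk into RAM/VRAM */ }

    // The "instant teleport": just switch which resident level is live.
    void teleportTo(std::size_t levelIndex) { active = levelIndex % levels.size(); }

    Level& current() { return levels[active]; }

private:
    std::array<Level, 3> levels{};  // 2-3 small linear levels held at once
    std::size_t active = 0;
};

int main()
{
    LevelFlipper flipper;
    flipper.preloadAll();
    flipper.teleportTo(1);  // back...
    flipper.teleportTo(0);  // ...and forth, with zero streaming
    return 0;
}

The trade-off, as the surrounding posts note, is that every resident level permanently occupies RAM/VRAM whether the player ever sees it or not.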
> I highly doubt some actually care for loading times over graphics/performance. […]

Nobody sane cares about 1 vs 2 seconds of loading time. Or even 4 vs 10 seconds, as with Elden Ring. The only difference that is actually impactful is the one between last-gen and current-gen consoles, where the delta can be up to one minute.
> Nobody sane cares about 1 vs 2 seconds of loading time. […]

Sane? Really now?
> Sane? Really now?

Or to rephrase: nobody who actually just wants to play video games.
> Or to rephrase: nobody who actually just wants to play video games.

And based on what?
> Sane? Really now?

Bernd Lauert never said:
"They called me mad, and I called them mad, and damn them, they outvoted me."
> Agreed, playing Elden Ring and barely having enough time to let out a fart before it loads after I die is amazing. If only we could get a Bloodborne update. That game needs faster load times.

There is a way to reduce loading in Bloodborne. It's called git gud.
> There is a way to reduce loading in Bloodborne. It's called git gud.

Seriously though, I agree with you on loading times and Bloodborne. It fucking sucks, even more so after being spoilt by the quick PS5 Demon's Souls loading.
> I fully appreciate you writing the pseudocode out to illustrate the exact part of the algorithm you believe can't be pre-calculated, but I would still argue that even the "random" parts can be faked/cheated with pre-calculated cyclical lookups […] computation is fully deterministic if you just unroll the algorithm.
>
> Cheating well enough for the untrained gamer eye has been a staple of 3D games since the start, where discrete pre-calculated trig table samples were used instead of trig functions, despite the user's viewpoint orientation and direction needing to appear to come from any possible random quaternion.
>
> I'm not saying I currently have the answers to pre-calculate parts or all of your algorithm's computational hotspots, but the industry will find ways to use 2-3 frame check-in, massive IO with massive decompression, and pre-calculation to cheat in ways we don't expect, if the history of gaming is anything to go by.
>
> Edit: If you come back to me and say that the amount of pre-calculated data needed to fake a certain part of an algorithm, even heavily compressed, would exceed 200 GB, then I would concede that this generation's IO isn't suited to substituting for that computation. Even then, though, I still wouldn't rule out refactoring that data into smaller pre-calculated intermediate data plus some computation to replace the highly complex computation.

That's just not going to happen. Some things can be cheated, like procedural noise texture lookups, but we aren't moving more and more towards pre-calculation; we are moving away from it. It would be a nightmare pipeline for artists, and it removes any kind of "natural" computation happening in an analytical way (i.e. the rendering equation) with reasonable results for dynamic scenes. Besides, pre-computation's biggest weakness is accuracy: storing lookups would take an enormous amount of memory to reach even feasibly "OK" approximations, a problem analytical solutions just don't suffer from.
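To make the quoted "pre-calculated cyclical lookup" idea concrete, here is a minimal C++ sketch, entirely mine rather than either poster's: directions are baked once offline from a seeded (and therefore deterministic) generator, then replayed cyclically at render time instead of being computed per sample. It also exposes the trade-off the rebuttal leans on: the pattern repeats with the period of the table, and a table large enough to hide that repetition costs memory.

#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

struct Dir { float x, y, z; };

// Offline step: bake n "random" unit directions from a seeded PRNG.
// The same seed always reproduces the same table, which is the
// "computation is fully deterministic" point made in the quote.
std::vector<Dir> bakeDirectionTable(std::size_t n, unsigned seed)
{
    std::mt19937 rng(seed);
    std::normal_distribution<float> g(0.0f, 1.0f);
    std::vector<Dir> table(n);
    for (Dir& d : table) {
        // A normalized Gaussian triple is uniform on the sphere.
        // (A zero-length triple is vanishingly unlikely; ignored here.)
        float x = g(rng), y = g(rng), z = g(rng);
        float len = std::sqrt(x * x + y * y + z * z);
        d = { x / len, y / len, z / len };
    }
    return table;
}

// Runtime step: no sampling math at all, just a cyclical lookup.
// The cost of the cheat is also visible: output repeats every n samples.
const Dir& nextDirection(const std::vector<Dir>& table, std::size_t& cursor)
{
    const Dir& d = table[cursor];
    cursor = (cursor + 1) % table.size();
    return d;
}

int main()
{
    std::vector<Dir> table = bakeDirectionTable(4096, 1337u);
    std::size_t cursor = 0;
    const Dir& d = nextDirection(table, cursor);  // stands in for per-sample RNG
    (void)d;
    return 0;
}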
> Play Gran Turismo 7 on PS5 for a week with those 2-second loading times […]

Well, I don't own last-gen or current-gen consoles, so I never had those long loading times and was never affected by them. But I definitely get your point, coming from last-gen consoles.
> Nobody sane cares about 1 vs 2 seconds of loading time. […]

Exactly. That was the big change. But several people in here think PC was like the last-gen consoles, which is the furthest thing from the truth. We have had short load times in every game for the past decade or so.
> That's just not going to happen. Some things can be cheated, like procedural noise texture lookups […]

I hear what you are saying, and as an industry we are moving away from offline Lightmass-type pre-calculation, but probably because the results aren't known to be reliable until an overnight calculation finishes, so if there is an error the time is wasted. But pre-calculating data or results for something that takes at most a second to render on a PS5/XSX, so you can observe the identical scene running in real time on a more powerful graphics workstation, wouldn't be a problem for the art pipeline, and Lumen's incremental lighting already seems to be a step in that direction IMO.
> Well, I don't own last-gen or current-gen consoles, so I never had those long loading times […]

You sold your PS5 DE already? Let me guess: Bloodborne, then straight onto eBay? The moment was so fleeting it's like you never even had one.