
Tim Sweeney on the Tech Demo: "Nanite and Lumen tech powering it will be fully supported on both PS5 and Xbox Series X and will be awesome on both."

hyperbertha

Member
Just to be accurate about Epic China.
That is what he said.

2:06:30 If its a 1080P screen, 2 triangle per pixel, make some compression on vertex, than you still can run this demo, no need very high bandwidth and IO like PS5.
Where did you find this part? It kind of contradicts the rest: ''he said his notebook could run it at 1440p 40fps+, optimization target is 60fps for next-gen''. The notebook is an RTX 2080 mobile with a 970 EVO.

Essentially, if it's 4x less detail, then it can run on the aforementioned notebook. Okay.
The tweet right now is unclear. It says they can run it at 1440p 40fps on his notebook, and then we have the quote above that says it can only run on a 1080p screen with 2 triangles per pixel.
 

FireFly

Member
Maybe I should have been clearer. Why would a dev have a 1080p screen? And let's say that he did (if you can speculate, so can I), who's to say this wasn't connected to an external display? Just about every dev uses external screens.
True, we don't know either way.
 

ethomaz

Banned
Where did you find this part? It kind of contradicts the rest: ''he said his notebook could run it at 1440p 40fps+, optimization target is 60fps for next-gen''. The notebook is an RTX 2080 mobile with a 970 EVO.


The tweet right now is unclear. It says they can run it at 1440p 40fps on his notebook, and then we have the quote above that says it can only run on a 1080p screen with 2 triangles per pixel.
I posted the timestamp.

2:06:30

The 1440p 40fps part talked about on Reddit is at 53:00.

He says that for the Lumen scene but never gives the resolution... not the Nanite part (https://forum.beyond3d.com/posts/2126263/).

Edit - Fixed.
 

Thirty7ven

Banned
Where did you find this part? It kind of contradicts the rest: ''he said his notebook could run it at 1440p 40fps+, optimization target is 60fps for next-gen''. The notebook is an RTX 2080 mobile with a 970 EVO.


The tweet right now is unclear. It says they can run it at 1440p 40fps on his notebook, and then we have the quote above that says it can only run on a 1080p screen with 2 triangles per pixel.

Can I get a source on that RTX 2080 with 970 EVO?


I posted the timestamp.

2:06:30

The 1440p 40fps part is at 53:00.

He says that for the Lumen scene... not the Nanite part (https://forum.beyond3d.com/posts/2126263/).

He doesn’t specify resolution.
 

ethomaz

Banned
What specific part was Lumen used for, and what specific part was Nanite used for? If I'm not mistaken, the whole demo utilizes both; it's not a showcase of one technology running, then a separate part for another.
The Lumen demonstration starts at 5:24 in the Epic stream... it is the first part of the demo.
Nanite comes after.
 
The Lumen demonstration starts at 5:24 in the Epic stream.
Let's go with your interpretation of the "Lumen scene". According to the Beyond3D forum (not sure if I can link here), post #532:

A few things that I noticed from skimming through the video.
* Guy mentioned they can run the demo in the editor at 40fps, not 40+ but did not specify resolution.
* Currently Nanite has some limitations such as only works on static meshes, doesn't support deformation for animation, doesn't support skinned character model, supports opaque material but no mask.
* Lumen costs quite a bit more than Nanite
* UE5 could eventually be a hybrid renderer using both Lumen and Raytracing in the future.


If Lumen costs more than Nanite, wouldn't this solidify Jensen's statement about the 2080 Max-Q? Which would also give more power to this demo running at 1440p.




Post #587

Usually the editor performance is not as good as in-game, so that looks even better for PC GPU performance.

Even more power to the 1440p claim.
 

ethomaz

Banned
Let's go with your interpretation of the "Lumen scene". According to the Beyond3D forum (not sure if I can link here), post #532:




If Lumen costs more than Nanite, wouldn't this solidify Jensen's statement about the 2080 Max-Q? Which would also give more power to this demo running at 1440p.




Post #587



Even more power to the 1440p claim.
The opposite.

MaxQ is a weaker GPU than the RTX 2070, so running a more taxing part of the demo at 40fps means it was not at 1440p and/or the same level of detail.

That fits what the Epic guy said here:

2:06:30 If its a 1080P screen, 2 triangle per pixel, make some compression on vertex, than you still can run this demo, no need very high bandwidth and IO like PS5.

BTW Lumen relies on compute only... CUs... Xbox should run it better.
 
The opposite.

MaxQ is a weaker GPU than the RTX 2070, so running a more taxing part of the demo at 40fps means it was not at 1440p and/or the same level of detail.
BTW Lumen relies on compute only... CUs... Xbox should run it better.

Go back and read what Jensen said about his own products, specifically the Max-Q variant. We can only go off what he said, and he would know better than you or I on this matter.

No need to speculate on things that have already been confirmed by the devs themselves. It's easier to keep the thread neater that way as well.
 

yurinka

Member
Yeah, it makes total sense to run at the same resolution as the PS5 on a way weaker GPU with a slower SSD.
And 1440p fits this comment too :messenger_tears_of_joy: .

2:06:30 If its a 1080P screen, 2 triangle per pixel, make some compression on vertex, than you still can run this demo, no need very high bandwidth and IO like PS5.

Jensen is the Epic guy in the video? If so, then he is saying what I'm posting lol


I'm just curious... I believe Epic said 4 triangles per pixel for the PS5 demo.
Given what the Epic China guy said, it really makes no difference.
As I remember, Tim Sweeney said that a frame of the demo had around 20M triangles.
The demo was 1440p, and I'm not sure, but as I remember it was variable resolution.
So both numbers may be variable.
In any case, if both are fixed, 1440p (2,560 x 1,440) is 3,686,400 pixels, so 20M/3,686,400 = 5.43 triangles/pixel.
The same scene at 1080p (1920x1080) would be 2,073,600 pixels, so 9.65 triangles/pixel.

But I don't know, these numbers are too high. I assume the 20M also includes stuff that is not being drawn, and in the same way that the Geometry Engine can do culling, it may also reduce the meshes depending on the native resolution the game is being rendered at.
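If it helps, here is the same arithmetic in a few lines of Python; the fixed 20M-per-frame figure and fixed resolutions are exactly the assumptions flagged above, so treat it as a what-if, not a measurement.

# Triangles per pixel implied by a fixed 20M-triangle frame at fixed
# resolutions (both assumptions are uncertain, as noted above).
budget = 20_000_000
for name, (w, h) in {"1440p": (2560, 1440), "1080p": (1920, 1080)}.items():
    print(f"{name}: {w * h} pixels -> {budget / (w * h):.2f} triangles/pixel")
# 1440p: 3686400 pixels -> 5.43 triangles/pixel
# 1080p: 2073600 pixels -> 9.65 triangles/pixel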
 

ethomaz

Banned
As I remember, Tim Sweeney said that a frame of the demo had around 20M triangles.
The demo was 1440p, and I'm not sure, but as I remember it was variable resolution.
So both numbers may be variable.
In any case, if both are fixed, 1440p (2,560 x 1,440) is 3,686,400 pixels, so 20M/3,686,400 = 5.43 triangles/pixel.
The same scene at 1080p (1920x1080) would be 2,073,600 pixels, so 9.65 triangles/pixel.

But I don't know, these numbers are too high. I assume the 20M also includes stuff that is not being drawn, and in the same way that the Geometry Engine can do culling, it may also reduce the meshes depending on the native resolution the game is being rendered at.
You can choose how many triangles per pixel you will use.
The PS5 demo, I believe, used 4 triangles per pixel.
The Epic guy in the video says you can drop that to 2 triangles per pixel to run at 1080p in a less bandwidth-intensive way.

The numbers are not high... the source assets can have that many triangles, up to 1 billion or more, I don't know, but it only renders in proportion to the number of pixels... it is like supersampling.
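A rough sketch of that scaling in Python, treating the rendered triangle count as simply resolution times triangles per pixel; the 4 and 2 figures are the ones claimed in this thread, not anything Epic has confirmed.

# Rendered-triangle count if detail scales with pixel count, like supersampling.
def tri_count(width, height, tris_per_pixel):
    return width * height * tris_per_pixel

print(tri_count(2560, 1440, 4))  # claimed PS5 demo: 14,745,600 triangles
print(tri_count(1920, 1080, 2))  # claimed 1080p fallback: 4,147,200 triangles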
 

ethomaz

Banned
Go back and read what Jensen said about his own products, specifically the Max-Q variant. We can only go off what he said, and he would know better than you or I on this matter.

No need to speculate on things that have already been confirmed by the devs themselves. It's easier to keep the thread neater that way as well.
I'm not sure what you are trying to say... the RTX 2080 Max-Q is weaker than the RTX 2070.




It is a very capped GPU.
6.5 TFLOPS.

The RTX 2070 is 7.5 TFLOPS.
 
I'm not sure what you are trying to say... the RTX 2080 Max-Q is weaker than the RTX 2070.

Does it matter if it's weaker or stronger? The only thing people can go off is what Jensen said in relation to the 2080 Max-Q vs next-gen consoles. We haven't heard anything from AMD. Everything else is noise and open to interpretation. Jensen would have more insider knowledge about his own hardware compared to you; no offense, if any was taken.

If you would like to continue this conversation, feel free to DM me, as I don't want to be involved in the derailment of this thread. But I also don't want misinformation to be spread either.
 

Exodia

Banned
Just to be accurate about Epic China.
That is what he said.

2:06:30 If its a 1080P screen, 2 triangle per pixel, make some compression on vertex, than you still can run this demo, no need very high bandwidth and IO like PS5.


2:08:00, SSD bandwidth (for the flying part) isn’t as that high as ppl said, not need a stricted spec SSD (decent SSD is ok).
This is very important for understanding how the SSD plays a role in the flying scene.



At 1080p with lower quality you can run it without needing the PS5's SSD level.
The flying part is not SSD bandwidth intensive... not the whole demo.
He said he used a mobile RTX 2080 (RTX 2070 level) without specifying the SSD in the notebook... he ran only the opening part at 40fps (no resolution specified).

This translation is WRONG. Actually, it's rather fabricated. I can't find it anywhere. I have run it through Google Translate a hundred times.

On that timeline they talk about their "ultimate goal is one triangle per pixel" and then they mention "or its a lil more than two triangle per pixel". The whole 1080p thing was a hypothetical. It had nothing to do with the earlier statement that they are running it on a laptop in the editor at 40 fps.

Whoever posted that is full of it. The same way they tried to claim that they didn't run the second half of the demo. Pure hogwash.
 

ethomaz

Banned
This translation is WRONG. Actually, it's rather fabricated. I can't find it anywhere. I have run it through Google Translate a hundred times.

On that timeline they talk about their ultimate goal being one triangle per pixel and say that they are currently at two triangles per pixel. The whole 1080p thing was a hypothetical. It had nothing to do with the earlier statement that they are running it on a laptop at 40 fps.

Whoever posted that is full of it. The same way they tried to claim that they didn't run the second half of the demo. Pure nonsense.
So we can ask somebody else to translate... I think you will be surprised.

2:08:00 is 128 minutes btw.
 

ethomaz

Banned
Does it matter if it's weaker or stronger? The only thing people can go off is what Jensen said in relation to the 2080 Max-Q vs next-gen consoles. We haven't heard anything from AMD. Everything else is noise and open to interpretation. Jensen would have more insider knowledge about his own hardware compared to you; no offense, if any was taken.

If you would like to continue this conversation, feel free to DM me, as I don't want to be involved in the derailment of this thread. But I also don't want misinformation to be spread either.
Next-gen consoles are leagues ahead of the Max-Q in terms of GPU power.

That is not even in question lol

We have real benchmarks for the Max-Q... it sits between the RTX 2060 and RTX 2070.
The RX 5700 beats it easily.
 

yurinka

Member
The numbers are not high... the source assets can have that many triangles, up to 1 billion or more, I don't know, but it only renders in proportion to the number of pixels... it is like supersampling.
As I remember, what he said about the 1B triangles was that the original assets shown in the frame (before being imported into the engine) had around 1B triangles combined. Then the engine reduces them, and depending on the export target, in this case PS5, it reduces them more or less. In the case of PS5, to around 20M triangles for the PS5 demo version.

Any console or PC would explode if loading 1B triangles at the same time.
 

CobraXT

Banned
I said it before: unless you have some absurdly high-detail raw assets, you don't need more than a good SSD like what's available on the market today. And even then, the game engine will reduce the asset sizes and polycounts by a huge amount so the GPU can run it; there's no point in having 8K assets for 1440p games. And with the pitiful ~800 GB PS5 storage size, they had better optimize the asset sizes, since they will need to reduce overall game sizes unless Sony wants >500 GB games.
 

scalman

Member
But the question is: how many games will be made on Unreal 5? I don't know of any as of today, at least. And upgrades from Unreal 4 to Unreal 5 don't count, as the latest Unreal 4.x version is already almost Unreal 5 anyway. So show us something more, something as real as a game. Tech demos are nice; people see them and forget them soon enough.
 
But the question is: how many games will be made on Unreal 5? I don't know of any as of today, at least. And upgrades from Unreal 4 to Unreal 5 don't count, as the latest Unreal 4.x version is already almost Unreal 5 anyway. So show us something more, something as real as a game. Tech demos are nice; people see them and forget them soon enough.

Well, June 4 is approaching. Maybe we will see some UE5 games then. Maybe more in July from MS too.
 

Exodia

Banned
I posted the timestamp.

2:06:30

The 1440p 40fps part talked about on Reddit is at 53:00.

He says that for the Lumen scene but never gives the resolution... not the Nanite part (https://forum.beyond3d.com/posts/2126263/).

Edit - Fixed.

There is no "Lumen scene"; this is again another fabrication. The entire demo is running Nanite and Lumen from beginning to end. People have fabricated a narrative that the beginning is the Lumen scene and that the flying scene at the end is Nanite, which can only be done on the PS5. Then they have projected it onto their so-called "translation".
 

Exodia

Banned
As I remember, Tim Sweeney said that a frame of the demo had around 20M triangles.
The demo was 1440p, and I'm not sure, but as I remember it was variable resolution.
So both numbers may be variable.
In any case, if both are fixed, 1440p (2,560 x 1,440) is 3,686,400 pixels, so 20M/3,686,400 = 5.43 triangles/pixel.
The same scene at 1080p (1920x1080) would be 2,073,600 pixels, so 9.65 triangles/pixel.

But I don't know, these numbers are too high. I assume the 20M also includes stuff that is not being drawn, and in the same way that the Geometry Engine can do culling, it may also reduce the meshes depending on the native resolution the game is being rendered at.

Epic already talks about their goal being "one triangle per pixel". All of those 20 million triangles are not actually seen.
As the DF article put it: "Put simply, the scene renders through UE5 on a triangle-per-pixel basis with the user seeing only what he/she needs to see."

Also: "The 'one triangle per pixel' approach of UE5 was demonstrated with 30fps content, so there are questions about how good 60fps content may look."
 

psorcerer

Banned
That "translation", if you want to call it that, doesn't exist and had nothing to do with his earlier statement.

I'm not sure what statement is there.
The only statement we have is from Sweeney.
All other things are just speculation out of context.
 

Bryank75

Banned
As I remember, what he said about the 1B triangles was that the original assets shown in the frame (before being imported into the engine) had around 1B triangles combined. Then the engine reduces them, and depending on the export target, in this case PS5, it reduces them more or less. In the case of PS5, to around 20M triangles for the PS5 demo version.

Any console or PC would explode if loading 1B triangles at the same time.
The start scene was 20 million triangles, but they worked that up towards the statue scene; the statue had 33 million triangles alone......

The flying part was the most impressive part technically, though, because it is cycling through data, loading and unloading assets at the speed she is moving.



Here is an ex-EA dev breaking it down; it's very good.
 

Exodia

Banned
The start scene was 20 million triangles, but they worked that up towards the statue scene; the statue had 33 million triangles alone......

The flying part was the most impressive part technically, though, because it is cycling through data, loading and unloading assets at the speed she is moving.



Here is an ex-EA dev breaking it down; it's very good.


Wrong. That statue is still compressed in the engine by Nanite. It's the actual 3D model that is 33 million, but it's not rendering 33 million triangles; it's been compressed by Nanite.
Nanite is compressing billions of triangles into 20 million. There's no difference between the beginning and the end of the demo, as the budget for Nanite is set to 20 million triangles.
 

Bryank75

Banned
Wrong. That statue is still compressed in the engine by Nanite. It's the actual 3D model that is 33 million, but it's not rendering 33 million triangles; it's been compressed by Nanite.
Nanite is compressing billions of triangles into 20 million. There's no difference between the beginning and the end of the demo, as the budget for Nanite is set to 20 million triangles.
Well, that would make sense, since 20 million is more than enough for the number of pixels on screen. I was just repeating what was said in the video... which he didn't give context to.
 

Bryank75

Banned
Some Chinese guy claiming to be an Epic engineer..... translated.....

Yeah... right!! LOL

The desperation is palpable.
This is the problem when spec sheets mean more to people than games!

Ghost of Tsushima and TLOU2 look better than anything in the Xbox showcase, and they are coming out in the next two months. If Xbox had been delivering games at that level of quality, 12 TFLOPS and an SSD might mean something..... but you might as well give them to a baboon.
 
Some Chinese guy claiming to be an Epic engineer..... translated.....

Yeah... right!! LOL

The desperation is palpable.
This is the problem when spec sheets mean more to people than games!

Ghost of Tsushima and TLOU2 look better than anything in the Xbox showcase, and they are coming out in the next two months. If Xbox had been delivering games at that level of quality, 12 TFLOPS and an SSD might mean something..... but you might as well give them to a baboon.



[gif]



Why must everything be a console war for you guys? What's the point of drive-by shitposting and an off-topic rant? SDF has the whole squad on full active duty lately. Can we not just enjoy the tech demo?

By the way, it was out of the dev's mouth. Watch the video or get someone to translate it.
 

Bryank75

Banned
[gif]



Why must everything be a console war for you guys? What's the point of drive-by shitposting and an off-topic rant? SDF has the whole squad on full active duty lately. Can we not just enjoy the tech demo?
I appreciate the JR gif!

It wasn't meant as a drive-by post; it was just an observation that Xbox hasn't hit that bar for quality.

I think we all enjoyed the demo; it was on PS5..... I hope we get something cool from Xbox soon. I'm sure they have something, maybe The Initiative or Obsidian in June or July....

Did he work on the demo or UE5?
 
I appreciate the JR gif!

It wasn't meant as a drive-by post; it was just an observation that Xbox hasn't hit that bar for quality.

I think we all enjoyed the demo; it was on PS5..... I hope we get something cool from Xbox soon. I'm sure they have something, maybe The Initiative or Obsidian in June or July....
To be fair, if Hellblade is in-game, I would say it's just as impressive. The Xbox guys have at least been able to see what gameplay is like, so there's that. We still haven't seen PS5 gameplay. Let's just wait until there's sufficient gameplay of both, and then do comparisons. It's apples to oranges right now: gameplay vs no gameplay.

I'll be on the sidelines like


[gif]
 

Bryank75

Banned
To be fair, if Hellblade is in-game, I would say it's just as impressive. The Xbox guys have at least been able to see what gameplay is like, so there's that. We still haven't seen PS5 gameplay. Let's just wait until there's sufficient gameplay of both, and then do comparisons. It's apples to oranges right now: gameplay vs no gameplay.

I'll be on the sidelines like


[gif]
I agree, you know...you're alright!

 
To be fair, if Hellblade is in-game, I would say it's just as impressive. The Xbox guys have at least been able to see what gameplay is like, so there's that. We still haven't seen PS5 gameplay. Let's just wait until there's sufficient gameplay of both, and then do comparisons. It's apples to oranges right now: gameplay vs no gameplay.

I'll be on the sidelines like


[gif]
I'd say it's more impressive personally.

[image]
 

Darak

Member
I just watched some interesting impressions from Casey Muratori (author and streamer of Handmade Hero, a low-level programmer with vast industry experience).
  • The engine seems to be implementing geometry images (http://hhoppe.com/proj/gim/), a more advanced technique than normal or displacement maps. The geometry is probably not even stored as triangles, but purely as textures, and then recreated in a computation pass at an arbitrary level of detail (see the sketch after this list). There is probably still a low-res version of the geometry for collision, etc., but it's a clear win for artists. This is in fact not a huge innovation but a technique that has been known for a while; it is only now feasible thanks to improvements in memory/SSDs/etc., so we can pretty much take it for granted next gen.
  • Unreal has traditionally been weak at data streaming, and the focus of the new version seems to be on fixing that, which is a good direction.
  • The GI solution has huge lag (a dozen frames or so) and uses fading to hide it. Not a criticism, as no technology exists that would allow real-time GI without lag. In fact, the technology is impressive if it is 100% dynamic, not so much if it requires a preprocessing pass (which would make most geometry static).
  • The most impressive part of the demo, in his opinion, is the particle bugs (!), as making them work properly and without any apparent clipping over those geometry images is hard.
  • Character animation is extremely poor in the demo, with a lot of undesirably harsh blending and other issues (to be honest, Casey has high standards here, as he admits those same issues are commonplace in AAA games nowadays; he wonders if animation will be the next big engineering target, as rendering is reaching a level where improvements are no longer so evident).
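To make that first bullet concrete, here is a minimal sketch of the geometry-image idea, assuming the textbook formulation from the linked Hoppe paper rather than anything Epic has confirmed: vertex positions live in a texture, a grid of triangles is rebuilt from the texels, and downsampling the texture yields a coarser level of detail for free.

import numpy as np

def decode_geometry_image(gim):
    """Rebuild a triangle mesh from an (H, W, 3) texture of XYZ positions."""
    H, W, _ = gim.shape
    verts = gim.reshape(-1, 3)                 # one vertex per texel
    tris = []
    for y in range(H - 1):                     # each texel quad -> 2 triangles
        for x in range(W - 1):
            i = y * W + x
            tris.append((i, i + 1, i + W))
            tris.append((i + 1, i + W + 1, i + W))
    return verts, np.asarray(tris)

gim = np.random.rand(64, 64, 3).astype(np.float32)          # stand-in geometry image
full_verts, full_tris = decode_geometry_image(gim)
lod_verts, lod_tris = decode_geometry_image(gim[::4, ::4])  # ~1/16 the triangles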
Some pretty interesting insights IMHO from someone who really knows this stuff. For anyone interested, the stream where he commented on the demo is here (towards the end of the stream):

[Edit: it's subscriber only for a few days]

 

Kagey K

Banned
I just watched some interesting impressions from Casey Muratori (author and streamer of Handmade Hero, a low-level programmer with vast industry experience).
  • The engine seems to be implementing geometry images (http://hhoppe.com/proj/gim/), a more advanced technique than normal or displacement maps. The geometry is probably not even stored as triangles, but purely as textures, and then recreated in a computation pass at an arbitrary level of detail. There is probably still a low-res version of the geometry for collision, etc., but it's a clear win for artists. This is in fact not a huge innovation but a technique that has been known for a while; it is only now feasible thanks to improvements in memory/SSDs/etc., so we can pretty much take it for granted next gen.
  • Unreal has traditionally been weak at data streaming, and the focus of the new version seems to be on fixing that, which is a good direction.
  • The GI solution has huge lag (a dozen frames or so) and uses fading to hide it. Not a criticism, as no technology exists that would allow real-time GI without lag. In fact, the technology is impressive if it is 100% dynamic, not so much if it requires a preprocessing pass (which would make most geometry static).
  • The most impressive part of the demo, in his opinion, is the particle bugs (!), as making them work properly and without any apparent clipping over those geometry images is hard.
  • Character animation is extremely poor in the demo, with a lot of undesirably harsh blending and other issues (to be honest, Casey has high standards here, as he admits those same issues are commonplace in AAA games nowadays; he wonders if animation will be the next big engineering target, as rendering is reaching a level where improvements are no longer so evident).
Some pretty interesting insights IMHO from someone who really knows this stuff. For anyone interested, the stream where he commented on the demo is here (towards the end of the stream):


Subscriber only on your link.
 


I kind of sort of had an epiphany, and I hope this post helps out gamers who have had a hard time understanding the utilization of SSDs for graphics fidelity. Perhaps this chart can help (it is a very basic, abstract oversimplification):

[chart]


NEXT-GEN game development is now going to have the SSD in mind, because now it is UTILIZED and IMPLEMENTED from the start to work in conjunction with other system components (mostly the GPU and system RAM) to increase graphics fidelity. The Unreal 5.0 PS5 demo demonstrated two particular aspects: streaming of level of detail (LOD), and polygons (triangles). The XSX's and PS5's SSD approaches are different but aim for the same outcome (uninterrupted throughput). Both the XSX and PS5 *custom* SSDs are miles ahead of current PCs. However, I had this misconception in my head (and I'm thinking some gamers out there do as well) that having the upper hand on SSD can mitigate and compensate for a system that lacks in:
-TFLOPS (GPU)
-AMOUNT OF RAM AND RAM BANDWIDTH
-CPU POWER

and my impression is that it's just NOT true. It is truly uncharted territory (no pun intended) to what degree the SSD plays a role in graphics fidelity, because both Sony and Microsoft have paved a new frontier in game development that helps out developers exponentially. In the above chart, the SSD circle for PS5 would be hyperinflated compared to XSX and PC, but the XSX's circles for GPU and RAM bandwidth would be bigger (though not hyperinflated). The CPU circle would be mostly the same for XSX and PS5. The fruits of hyperinflating the SSD will most likely be demonstrated by Sony first-party studios, because Unreal Engine 5.0 is middleware, but customized at the same time to make sure the PS5 SSD is properly utilized (I am sure it will be for XSX as well). The 8-month development time, with limited people working on an unfinished Unreal 5.0 engine, is truly amazing and gives us a very unrefined glimpse of what's to come with SSD implementation. I am sure there might be some flaws in this, so feel free to add to a better understanding of this new frontier in video game graphics.
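As a back-of-the-envelope illustration of where the SSD circle bites, here is a toy throughput check; every number in it is invented purely for illustration, not a real spec.

# Toy check: can the drive stream new world data as fast as the camera
# uncovers it? All figures are made up for illustration only.
camera_speed_m_per_s = 40                  # e.g. a fast flying section
data_mb_per_meter = 30                     # compressed assets uncovered per meter
required_mb_per_s = camera_speed_m_per_s * data_mb_per_meter   # 1200 MB/s

for name, mb_per_s in {"HDD": 150, "SATA SSD": 550, "NVMe SSD": 3500}.items():
    verdict = "keeps up" if mb_per_s >= required_mb_per_s else "falls behind"
    print(f"{name}: {verdict}")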

Arigato Gozaimasu.
 
With all the leaks that came out of China today, this might seriously be one of the best years in gaming history, for everyone.


[image]



Tim said we can benchmark it and everything. Special thank you to all the doubters and naysayers.
 

Kagey K

Banned


I kind of sort of had an epiphany, and I hope this post helps out gamers who have had a hard time understanding the utilization of SSDs for graphics fidelity. Perhaps this chart can help (it is a very basic, abstract oversimplification):

[chart]


NEXT-GEN game development is now going to have the SSD in mind, because now it is UTILIZED and IMPLEMENTED from the start to work in conjunction with other system components (mostly the GPU and system RAM) to increase graphics fidelity. The Unreal 5.0 PS5 demo demonstrated two particular aspects: streaming of level of detail (LOD), and polygons (triangles). The XSX's and PS5's SSD approaches are different but aim for the same outcome (uninterrupted throughput). Both the XSX and PS5 *custom* SSDs are miles ahead of current PCs. However, I had this misconception in my head (and I'm thinking some gamers out there do as well) that having the upper hand on SSD can mitigate and compensate for a system that lacks in:
-TFLOPS (GPU)
-AMOUNT OF RAM AND RAM BANDWIDTH
-CPU POWER

and my impression is that it's just NOT true. It is truly uncharted territory (no pun intended) to what degree the SSD plays a role in graphics fidelity, because both Sony and Microsoft have paved a new frontier in game development that helps out developers exponentially. In the above chart, the SSD circle for PS5 would be hyperinflated compared to XSX and PC, but the XSX's circles for GPU and RAM bandwidth would be bigger (though not hyperinflated). The CPU circle would be mostly the same for XSX and PS5. The fruits of hyperinflating the SSD will most likely be demonstrated by Sony first-party studios, because Unreal Engine 5.0 is middleware, but customized at the same time to make sure the PS5 SSD is properly utilized (I am sure it will be for XSX as well). The 8-month development time, with limited people working on an unfinished Unreal 5.0 engine, is truly amazing and gives us a very unrefined glimpse of what's to come with SSD implementation. I am sure there might be some flaws in this, so feel free to add to a better understanding of this new frontier in video game graphics.

Arigato Gozaimasu.
The graphs are a little simplified; there is a lot more intermingling going on, and I would have built them differently.

The biggest thing you miss is that last gen (PS4 and XB1) and this gen (PS5 and Series X) are relatively the same. The HDD still had to do a bunch of heavy lifting this gen (all data is fully installed on it), so the pipeline doesn't really change in your scenario; it just gets faster.
 

yurinka

Member
The creators said they will explain how it works next week... the source code only in 2021 :(
Obviously not in detail, but they already explained it when interviewed by Geoff.

The start scene was 20 million triangles, but they worked that up towards the statue scene; the statue had 33 million triangles alone......
Yes, but the cave scene frame was around 20M when running on the console. The statue had 33 million, but not on the console: it had that in the source asset from ZBrush, before being put into the engine.

I assume that all the scenes, like the one with the room full of statues, the statue close-up, or the flying section, also have around 20M triangles per frame.

I think that before exporting to the console, the engine reduces that 1B into something way smaller and more manageable for the console's streaming. Then the console streams (more or less, depending on I/O speed/bandwidth and RAM) whatever is going to be drawn in the next second or so, and shrinks it with the Geometry Engine into something manageable by that specific console's CPU/GPU/RAM at the desired resolution and framerate: in this case, I assume, these ~20 million triangles.

I assume that when being rendered by the console, the geometry of an object (let's say a statue) would be bigger or smaller depending on its distance from the camera: they basically take whatever is more or less seen by the camera and shrink it until they reach that triangle budget per frame, let's say 20M. Now with an SSD they not only read faster: there are no seek times, so they don't need to read a chunk of data sequentially, and they don't need to duplicate it several times on disk to reduce seek times. If desired, they can even read only a portion of an object (say the statue is split into 4 parts and they only stream the ones seen by the camera). Streaming is also way more optimized with an SSD than with an HDD.
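A toy version of that loop in Python, streaming only what the camera should see over the next second and then shrinking to the per-frame triangle budget; every name and number below is hypothetical and just mirrors the description above, not any real engine API.

# Hypothetical sketch of the stream-then-shrink idea described above.
FRAME_BUDGET = 20_000_000                  # triangles per frame, per the demo talk

def shrink_to_budget(visible_chunks, budget=FRAME_BUDGET):
    """Uniformly simplify streamed chunks until the frame fits the budget."""
    total = sum(tris for _, tris in visible_chunks)
    scale = min(1.0, budget / total)
    return [(name, int(tris * scale)) for name, tris in visible_chunks]

# Chunks predicted visible within the ~1 second look-ahead (made-up data);
# only the statue parts facing the camera are streamed, not the whole model.
chunks = [("statue_front_half", 16_000_000), ("cave_walls", 12_000_000)]
print(shrink_to_budget(chunks))            # both scaled to fit ~20M total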
 