Shmunter
Member
Well spotted, even if true.
A LOT of reminders about how the triangle density scales with the GPU though!
And it was always the same person repeatedly pointing out GPU scalability. I'll leave it as an exercise to guess who.
Well spotted, even if true.
That’s the idea apparently. But I’m not sure if there is anything preventing a triangle per 4 pixels, for example. Less detail, but the 4K crowd will give it a tick. Also other overlaid objects can be 4K then too, etc.
With Nanite, triangle density will be associated with pixels, so higher resolution means more triangles. Am I getting this wrong?
Actually if you have the same number of geometry engines (and shader engines) in a GPU (like XSX and PS5), triangle density (particularly when there are a lot of small triangles) mainly scales with GPU frequency, and outside the GPU obviously with the speed of the storage solution, because triangles take up a lot of space (but compress very well).
A LOT of reminders about how the triangle density scales with the GPU though!
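For what it's worth, a quick back-of-the-envelope on that frequency-scaling point. The "4 primitives per clock" figure is my own illustrative assumption, not a confirmed spec for either console; only the clock speeds are the publicly stated ones.

```python
# Peak triangle setup rate = primitive units per clock x clock frequency.
# The prims-per-clock value is assumed for illustration only.

def peak_triangles_per_second(prims_per_clock, clock_ghz):
    """Theoretical front-end throughput, ignoring every real-world bottleneck."""
    return prims_per_clock * clock_ghz * 1e9

for name, clock_ghz in [("PS5", 2.23), ("XSX", 1.825)]:
    rate = peak_triangles_per_second(4, clock_ghz)
    print(f"{name}: ~{rate / 1e9:.1f} billion triangles/s peak setup rate")
```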
That’s the idea apparently. But I’m not sure if there is anything preventing a triangle per 4 pixels, for example. Less detail, but the 4K crowd will give it a tick. Also other overlaid objects can be 4K then too, etc.
But with smart upscaling it’s likely a moot point anyway.
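Rough numbers on the pixels-per-triangle point, purely back-of-the-envelope (the densities here are assumptions for illustration, not anything Epic has stated):

```python
# If the renderer targets roughly one triangle per N pixels, the on-screen
# triangle count scales with output resolution rather than with asset size.

def visible_triangles(width, height, pixels_per_triangle):
    """Rough ceiling on rendered triangles for a full-screen scene."""
    return (width * height) // pixels_per_triangle

for label, (w, h) in [("1440p", (2560, 1440)), ("4K", (3840, 2160))]:
    for ppt in (1, 4):
        tris = visible_triangles(w, h, ppt)
        print(f"{label}, 1 triangle per {ppt} px: ~{tris / 1e6:.1f} M triangles on screen")
```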
Don’t know about y’all but I’m binging Kitano and Kurosawa tonight.
NX Gamer chiming in.
Thanks for the clarification, I did go a few orders of magnitude too far.
I'm not a 3D artist, but that to me sounds off by several orders of magnitude, especially when you consider models are also compressed on the SSD.
You need at minimum 3 floating point numbers per vertex. I found, for example, that in Maya models each of those floating point values is 4 bytes, so in total 1 vertex costs 12 bytes.
Then you need to map faces (triangles) to the vertices, which should cost in the region of 6 bytes each as far as I could find.
Since the overwhelming majority of triangles reuse the same vertices, we might be talking about perhaps 1.5 billion vertices for a 1 billion-triangle model, or even closer to the number of triangles (is there a formula to generalize it?).
So you should have (1B x 6) + (1.5B x 12) = 6B + 18B = about 24B bytes, or 24GB
Again, this is all uncompressed. Tools like Draco and Open3DGC can easily reduce model sizes by a factor of 30-50, so we would really be talking about 800MB.
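Putting the same ballpark assumptions into a few lines of Python to sanity-check the arithmetic (12 bytes per vertex, ~6 bytes of index data per triangle, ~1.5 vertices per triangle, ~30x compression; all rough figures from the post above):

```python
BYTES_PER_VERTEX = 3 * 4      # x, y, z stored as 4-byte floats
BYTES_PER_TRIANGLE = 6        # three roughly 2-byte vertex indices

triangles = 1_000_000_000
vertices = int(1.5 * triangles)   # the post's estimate; closed meshes are often nearer 0.5x

raw_bytes = triangles * BYTES_PER_TRIANGLE + vertices * BYTES_PER_VERTEX
print(f"uncompressed: ~{raw_bytes / 1e9:.0f} GB")          # ~24 GB
print(f"compressed ~30x: ~{raw_bytes / 30 / 1e6:.0f} MB")  # ~800 MB
```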
Ecco ripoff's dev, is that you?
This is an opinion of a dev on the PS5 SSD speed:
"The SSDs are obviously the biggest change, although probably less important than some people who think they're black magic are making them out to be. They won't cure cancer, but they potentially make loading a thing of the past . You can instantly transport across the map through a portal in my game with no pop-in or anything, and that's at 60 FPS.
And that's while targeting generic SATA SSDs on PC for compatibility. I don't even wanna' know what you could *need* the PS5's SSD for"
Your post started out really well. Too bad the last paragraph had to show your preferences. The engine here is amazingly innovative, both for visual fidelity and for developers. The fact at this point is that we do not know how it runs on the XSX. To attribute this achievement solely to the PS5 is premature.
The point of Nanite is that if you were to spend however many seconds it took to render a frame that literally contained billions of triangles from high quality source assets, and then fed those same assets into Nanite and let it losslessly crunch them down so the GPU only had to end up rendering a fraction of them, the resultant frames would be identical pixel for pixel.
There’s no point having a thousand triangles hiding behind a single pixel that will eventually collapse into a single RGB value if you can crunch down the geometry before that stage to almost a single triangle that ends up giving you the same RGB value.
Nanite in UE5 means you can sling your highest quality assets in and have them rendered “as if” the GPU was crunching the whole lot in real time as far as the final frame is concerned.
That means for practical purposes there really are billions of triangles in that single frame (at least in a data structure before Nanite goes to work), and the closer you move in to take a look, the more of them you’ll see from a given asset.
That’s incredible. It means practically infinite detail, infinite draw distance, zero pop-in, zero loading. Truly next-gen in a way that slapping higher resolution and even frame rate on what we already have just isn’t.
It’s the geometry side of adding detail just simply DONE.
Was this really what Cerny meant when he said “when triangles are small...”? Was he talking literally with this technology in mind?
A lot of trying is being done to cynically dismiss what Cerny and Sweeney are saying as just marketing, or even somehow political in nature.
They are both proven and successful game engine engineers.
If they can be dismissed as just being clever at marketing and misleading the public with the UE5 demo (even at the expense of alienating all other platforms/customers in the case of Epic), then gaming journalists and technodabblers and babblers here can be dismissed as knowing far less about the subject than either of these two men.
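The "crunch geometry down to roughly what the pixels can resolve" idea in the quoted post can be illustrated with a minimal sketch. To be clear, this is not Epic's actual algorithm, just an assumed screen-space-error LOD pick over precomputed cluster simplifications:

```python
from dataclasses import dataclass

@dataclass
class ClusterLOD:
    triangle_count: int
    geometric_error: float   # world-space error this simplification introduces

def pick_lod(lods, distance_m, pixels_per_metre_at_1m):
    """lods is ordered finest -> coarsest. Return the coarsest LOD whose
    simplification error still projects to under ~1 pixel at this distance."""
    chosen = lods[0]                      # fall back to the finest LOD
    for lod in lods:
        projected_error_px = lod.geometric_error * pixels_per_metre_at_1m / max(distance_m, 1e-6)
        if projected_error_px <= 1.0:
            chosen = lod                  # coarser is cheaper and still sub-pixel
    return chosen

# A cluster with four precomputed simplification levels, viewed from 10 m away.
lods = [ClusterLOD(128, 0.001), ClusterLOD(64, 0.004),
        ClusterLOD(32, 0.016), ClusterLOD(16, 0.064)]
print(pick_lod(lods, distance_m=10.0, pixels_per_metre_at_1m=2000.0))
```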
Get a load of this, NX Gamer UE5 thread on reerar.....
https://www.resetera.com/threads/nx...ts-to-come-tech-analysis.205554/post-34081008
We don’t know if the ssd plays a part in the engine so it’s best not to talk about it. Jesus wept, even after everything Sweeney said with his own mouth. Wtf? This is officially twilight zone territory.
It's not that they are lying. They are using words in a way that emotionally hypes up the masses. Look how many people think that billions of triangles are being rendered in the demo, while in reality it's around 30 million. People latch onto big numbers and big words without understanding, and then the false understanding spreads around, creating unrealistic hype. And that ultimately sells.
I take back what I said earlier. Obvious bias going on there. He wants absolute proof of any kind of usefulness of the IO system, but freely speculates about the impact of GPU parallel processing (as a given) every chance he gets.
Marketing doublespeak is indeed a perilous minefield. And it is up to us enthusiasts to scrutinise it and come to conclusions based on known info paired with critical thinking...
Your post started out really well. Too bad the last paragraph had to show your preferences. The engine here is amazingly innovative, both for visual fidelity and for developers. The fact at this point is that we do not know how it runs on the XSX. To attribute this achievement solely to the PS5 is premature.
And if you can't see the obvious partnership between Epic and Sony here, I don't know what to tell you. Epic is trying to make their own engine look as awesome as possible so that it gets wide adoption. Sony is trying to make the PS5 look as powerful as possible, so they sell more consoles and games.
If say Crytek would partner up with Microsoft and Xbox, the same thing would happen. Talk about the velocity architecture, the breakthroughs that MS made, advantages over PC, all of those would be brought to the forefront.
It's not that they are lying. They are using words in a way that emotionally hypes up the masses. Look how many people think that billions of triangles are being rendered in the demo, while in reality it's around 30 million. People latch onto big numbers and big words without understanding, and then the false understanding spreads around, creating unrealistic hype. And that ultimately sells.
Not sure about how important VRR is, I've never had screen tearing in my whole life, and lately with my Sony tv HDMI 2.0. I only see screen tearing on youtube as they talk and show it. Maybe it's a problem with some tv's or monitors. I've played on open framerates on many games on PS4 Pro and never had an issue, and it's obvious that the performance is floating between 30-60fps and unstable yet looks clean.
Screen tearing is mostly an Xbox thing. They don't go for Vsync while PS does. That's what I noticed from a lot of Eurogamer 360 v PS3 videos and showdowns.
I don't recall seeing tearing EVER, except when I played Haze on the PS3. There was this one level where I saw some of that. I've never seen this on the PS4, and since I upgraded to a Pro I haven't seen it there either. I am pretty hyped for any kind of "boost mode" backwards compatibility though, based on my experience going from the "regular" PS4 to the Pro model with Elite: Dangerous. With the PS4, there could be SOME SLIGHT slowdown in packed asteroid fields while combat was also happening. I don't see that at all on the Pro. Good times! I can't wait!
Marketing doublespeak is indeed a perilous minefield. And it is up to us enthusiasts to scrutinise it and come to conclusions based on known info paired with critical thinking...
Facts
- We know PS5 has double the streaming capability of XSX
- We do not know how far the PS5 streaming architecture was pushed for the demo, however;
Assumptions
- We can assume Epic wanted to market their engine at its best and therefore not actively restrict their vision, why would they?
- If the above is true, they would therefore push the stream as far as it can go; the high speed flyover at the end seemingly reinforces it
- If the 5.5 gig was then utilised, a system with less stream capability would require nips and tucks to fit the gameplay within its asset capability
I cannot see anything but the above to be the most plausible assessment of the presentation
Not to mention his Epic store is on PC, so it's working against his interests in part.
Otherwise why not show it running on a PC?
What's there to gain by showing it on PS5?
It debuted on Geoff's show, not at a PlayStation event. It's a multi platform engine that started on PC. The open platform is the PC. Epic has a much bigger presence on PC.
If all it needs is more compute, it stands to reason that they would show it on PC. It's the only platform where I can just download Unreal and start messing around. And why is Tim wasting his time talking about SSD/IO if it doesn't matter?
It just doesn't add up. And those saying Tim got paid, lol, this dude is rich, baby, he is fuck you rich.
To those of you who have been wondering what the engine SuckerPunch uses is called: it's SPACKLE (Sucker Punch Animation & Character Kinematics Life Engine).
The thing is, none of SuckerPunch's games have been powered by middleware third party engines. They built theirs for the first Sly Cooper game and have been rolling with it ever since, making steady and meaningful improvements to it as new games were being developed. This is the state the engine is in now with Ghost Of Tsushima, and it's mighty impressive for a late gen showcase title!!!
#SSDBoysGG boys
At bold part; More RAM usage is a viable alternative...
And I'm going to throw something else out there that people are not thinking about. Maybe the demo indeed used the full 5.5 GB/s. But this will not be achievable for retail games. Why? Because the I/O has to handle more than just the game. It has to handle the OS for example. And what if you put a second drive in there? The OS works on one, and the game installed on the other SSD also has to go through the same I/O. What if you connect a bunch of hard drives with older games on your console?
The bottom line is that developers will have to pick a reasonable throughput that guarantees no conflict with anything else that is putting any sort of strain on the I/O. If you design your game for 5.4GB/s, and some other drive decides to use 0.2GB/s at any given moment, your game WILL stutter.
And yes, the same applies for the XSX too, so the PS5 advantage does not necessarily diminish. But no one will be using the max throughput at the risk of creating conflict with other software. For HDDs, it was the same. The HDD can transfer anywhere between 50 MB/s to 150MB/s, but games generally targeted at most 20MB/s for streaming. If you don't, you WILL get performance issues.
And here's the kicker... PlayStation has VR. The Xbox does not. I'll let you think about what the implication of that is.
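If it helps, here is that budgeting argument as a tiny sketch; the headroom figure is purely illustrative, not a measured number:

```python
# Whatever the drive's peak throughput is, a game has to leave headroom for
# the OS, recording, extra drives, etc., or a background read at the wrong
# moment causes a stutter.

def safe_streaming_budget(peak_gb_s, headroom_gb_s=0.5):
    """Throughput a game could design around after reserving headroom."""
    return max(peak_gb_s - headroom_gb_s, 0.0)

for name, peak in [("PS5 raw", 5.5), ("XSX raw", 2.4), ("SATA SSD", 0.55)]:
    print(f"{name}: peak {peak} GB/s -> design target ~{safe_streaming_budget(peak):.2f} GB/s")
```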
Not to mention his Epic store is on PC, so it's working against his interests in part.
The simple answer is he is a nerd that sees something really cool and it excites him. He knows it’s the future path and undoubtedly wants this to be a learning opportunity to push PC architecture in this direction.
The denial, there and here, from some of the users is real.
Nobody knows the reason except Epic themselves.
My theory is still that Sony promised Epic that Sony's upcoming games for PC will be exclusive to EGS, or that Sony paid Epic a decent amount of money for this partnership.
I very much doubt Epic would spend thousands of hours producing a tech demo of this sort, just because the CEO is a "nerd" who gets "excited" about this stuff.
It has, don't worry.
Knowing that what was seen in the UE5 demo did not fit in RAM and had to be sent directly by streaming... I think it should have been clear that the SSD (and all the I/O) will be quite relevant. It matters little how much RAM XSX may have. The I/O will make a difference. It is a different architecture, and there is nothing like it. Therefore, it cannot be compared with anything from the past, nor can it be programmed like in the past. Trying to make any kind of comparison is rubbish. I understand the need, but it cannot be done.
Only the results can be compared (yes), and they can be better, the same or worse, but the way of doing it is totally different. The basis of both may seem the same, but at the same time it is not. Many will find it difficult to understand this, but they will realize it over the years. PS5 is already a benchmark in the creation of new architectures on PC. There are too many bottlenecks on PC today, and they are fated to go away. PS5 has none. Even the size of the SSD is designed for it. It was not chosen at random.
I hope Google has done a decent translation.
Gotcha.
Still, I don't see multiplats take full advantage of this. We'll see.
Whenever Alex speaks:
Dude, this guy is on another level of damage control, he is waiting for Phil to pet his head for damage controlling
I think you're right, no reason to have it more complicated than that. Except more finesse as to where higher and lower assets get deployed in a scene, to maximise the visuals that count.
About that underlined part: Why not just reduce asset quality then, or "simply" have some kind of config value for the resolution at which the asset has to be streamed?
Say you develop a game for a PC-master-race xx.xxxx$ PC setup. The asset quality streamed shall be 100%; now the game's running on console x: stream assets at 80% quality, console y: 75% quality.
Guess that could be possible? Maybe they'd have to reimport the assets and lower the quality for each asset manually, but yeah, something along those lines should be doable.
There will probably be much smarter solutions to achieve nearly the same quality/performance by adjusting some things.
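Something like the sketch below is probably what's being described; the names and percentages are made up for illustration, and this is not UE5's actual scalability API:

```python
ASSET_QUALITY = {          # fraction of full-resolution source data to stream
    "pc_highend": 1.00,
    "console_x": 0.80,
    "console_y": 0.75,
}

def streamed_asset_bytes(full_asset_bytes, platform):
    """Scale how much asset data is streamed by the platform's quality setting."""
    return int(full_asset_bytes * ASSET_QUALITY[platform])

full_size = 200 * 1024 ** 2                           # a hypothetical 200 MB source asset
print(streamed_asset_bytes(full_size, "console_y"))   # ~75% of 200 MB
```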
Out of context, lol.
Taking an out-of-context picture can paint whatever you want it to. I'm excited for next-gen, and "that awful site" is Era, which can be a truly awful site. I'm excited for next-gen, more so after today's tech demo.
I didn't hear about June 10 for Series S. Where did that one come from?
So June 4th for the full PS5 and its games blowout, and June 10th for revealing the Xbox Series S (4tf Lockhart). Almost confirmed! June is going to be a big freaking month!
Yep add Ryan McCringy to the list too.
Edit: and here it is....lunacy that DF didn’t cover the ssd aspect and how it works with the engine. I mean Sweeney and co went on and on about how pivotal it is to all this. Lol, it’s conspiratorial but it’s as if there is a concerted effort to take focus off the ssd, I mean seriously.
Let him realize those 100GB still need to be streamed at 2.4GB/s, and the normal uncompressed speed of the PS5 without Kraken is already quicker than BCPack can achieve.
Again with the 100GB
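For reference, the publicly quoted figures side by side; the "typical compressed" numbers are the vendors' own estimates (Kraken on PS5, BCPack/zlib on XSX), so treat them as rough, workload-dependent values:

```python
drives_gb_s = {
    "PS5 raw": 5.5,
    "PS5 typical compressed (Kraken)": 8.0,   # Sony has quoted 8-9 GB/s typical
    "XSX raw": 2.4,
    "XSX typical compressed (BCPack)": 4.8,   # Microsoft's quoted figure
}

# Time to move a 100 GB working set at each rate, just for scale:
for name, speed in drives_gb_s.items():
    print(f"{name}: {speed} GB/s -> 100 GB in ~{100 / speed:.0f} s")
```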
Still an overlooked quote of the interview imo. I remember Cerny saying the exact same thing during GDC, about how they could have an SSD so fast that it would stream data as the character is turning/moving, and after that GDC talk some tech youtubers like DF doubted it because "cmon, it can't be that fast". Not a dig at them, because we didn't see an actual application of it at the time.
"Nanite allowed the artists to build a scene with geometric complexity that would have been impossible before, there are tens of billions of triangles in that scene and we couldn't simply have them all in memory at once and what we end up needing to do is streaming in triangles as the camera moves around the environment and the I/O capabilities of the PS5 are what allow us to achieve that level of realism."