
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

Null_Key

Neo Member
You know, in the Middle East they wipe their ass with their hand.
 

pawel86ck

Banned
I'm going to let experience and common sense do the work for me. There really isn't any kind of secret sauce going on nor will ever be at this stage (except Switch games). They are all using DX and XSX and PCs are pretty tightly coupled.
No secret sauce? Are you telling me we have SSDs with hardware decompression on PC?
 
Last edited:
I'm going to let experience and common sense do the work for me. There really isn't any kind of secret sauce going on nor will ever be at this stage (except Switch games). They are all using DX and XSX and PCs are pretty tightly coupled.
But what about Sony, who have never used DX and aren't bound by any PC architecture? Are you saying their solution will be the same with just different API calls, and there won't be any secret sauce on their part?
 

MarkMe2525

Banned
Dude... I'm excited for the Series X after what was announced today. It seems like they've addressed every inconvenience our current boxes give us and brought some really well-thought-out hardware. Can't wait for Sony's announcement so I can try to figure out which horse to hitch my wagon to. Probably Xbox, but who knows.
 

semicool

Banned
You guys are taking info and running all the way around the world with it. LOL!

12TF > 16TF both using the same graphics API and the same draw calls? Yea, OK.
You forget, which surprises me coming from a PC guy, that doubling GPU performance alone doesn't mean almost double real-world performance. It usually takes upgrading many parts and removing bottlenecks to get the kind of performance increase we're seeing here with Gears 5. This is close to Carmack's claim of 2x performance for a console vs. a similarly spec'd PC, which you like to make fun of but which has a lot of evidence behind it.

PCs are closing that gap with Vulkan and DX12, but draw calls are only part of that equation. Take Linux and X Windows, for example: extremely poorly optimized even with Vulkan if you're using X Windows and not Wayland.

Consoles can still apply custom hardware AND software optimizations that are out of PCs' reach even with Vulkan and DirectX 12, which is what we're seeing here: pre-optimizing for said next-gen console, not with a game designed from the ground up for it, but in a roughly two-week port, compared against a game already running on PC.
 
Last edited:

semicool

Banned
I read that. It doesn't specify the resolution or the graphics features enabled to run 100 FPS. It reads like 2 different paragraphs (and I'm sure it is). The other question is why wouldn't the optimizations to the engine be available for the Nvidia/AMD GPUs? UE4.0 has a LOT of frame-pacing issues. Some of the UE games with very few features are still bottlenecked. We don't see this with every graphics engine.
Yes, other articles have said it's PC top-tier Ultra settings ("Ultra+" they called it), plus, among other graphical improvements, global illumination, all at 4K. Did you not read the Digital Foundry article?

Update edit: https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs
 
Last edited:

kyliethicc

Member
I don't get why MS is going with the weird RAM setup. Only 10GB of really fast memory for devs, and 3.5GB of slower RAM. If Sony does a full 16GB of fast RAM, 13-14 usable to devs, that would be a big advantage. Even more so if Sony goes with the rumored 16GB for devs and 2-4GB of DDR4 for the OS.

Agreed. My guess for PS5 is 16 GB GDDR6, 14 GB for devs. The other 2 GB + 1 GB DDR4 will be for the OS. If PS5 has 14 GB at 512 GB/s vs XSX 10 GB at 560 GB/s & 3.5 GB at 336 GB/s, Xbox might be at a disadvantage.
 
I don't get why MS is going with the weird RAM setup. Only 10GB of really fast memory for devs, and 3.5GB of slower RAM. If Sony does a full 16GB of fast RAM, 13-14 usable to devs, that would be a big advantage. Even more so if Sony goes with the rumored 16GB for devs and 2-4GB of DDR4 for the OS.
Saying "really fast" and "slower" means nothing without numbers; just look at the hard facts:

10GB @ 560 GB/s: you called it "really fast", and it is nearly on par with a top PC GPU, the RTX 2080 Ti @ 616 GB/s.
6GB @ 336 GB/s: "slower", but still faster than both the One X @ 326 GB/s and the PS4 Pro @ 218 GB/s, and, come to think of it, even faster than the original Xbox One's "fast" ESRAM @ 204 GB/s.
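All of those figures fall out of one formula, peak bandwidth = bus width × per-pin data rate. A quick sanity check (bus widths here are the commonly published spec-sheet values, not from this thread):

```python
def peak_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin rate in Gbps."""
    return bus_bits / 8 * gbps_per_pin

# XSX fast pool: 320-bit bus, 14 Gbps GDDR6
print(peak_bandwidth_gbs(320, 14))   # 560.0 GB/s
# XSX slow pool: only 192 bits' worth of chips are addressable
print(peak_bandwidth_gbs(192, 14))   # 336.0 GB/s
# RTX 2080 Ti: 352-bit bus, 14 Gbps GDDR6
print(peak_bandwidth_gbs(352, 14))   # 616.0 GB/s
```

So the quoted numbers are internally consistent; the "fast" and "slow" pools differ only in how many chips contribute to a read.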
 

Silver Wattle

Gold Member
I don't get why MS is going with the weird RAM setup. Only 10GB of really fast memory for devs, and 3.5GB of slower RAM. If Sony does a full 16GB of fast RAM, 13-14 usable to devs, that would be a big advantage. Even more so if Sony goes with the rumored 16GB for devs and 2-4GB of DDR4 for the OS.
Yeah; if they'd kept a 256-bit bus but increased the GDDR6 speed from 14 Gbps to 18 Gbps (which is available), they would have had 16GB @ 576 GB/s, with all 16GB at the same speed.
This appears to have been an area of compromise for Microsoft.
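The hypothetical uniform config above checks out with the standard bandwidth formula (a rough sketch; the 256-bit/18 Gbps configuration is this poster's hypothetical, not an announced part):

```python
def peak_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin rate in Gbps."""
    return bus_bits / 8 * gbps_per_pin

# Hypothetical uniform pool: 256-bit bus with 18 Gbps GDDR6
print(peak_bandwidth_gbs(256, 18))   # 576.0 GB/s across all 16GB
# Actual XSX fast pool for comparison: 320-bit bus at 14 Gbps
print(peak_bandwidth_gbs(320, 14))   # 560.0 GB/s, but only for 10GB
```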
 

VFXVeteran

Banned
But what about Sony who have never used Dx and not bound by any PC architecture? So you are saying their solution will be the same with just different API calls and there won't be any secret sauce on their part??

Sony does use DX for their development. Trust me on that. ND have two different boxes; their main one runs Windows with VS, compiled to use their special SL wrapper, which is essentially HLSL. They then port the game to the PS5 dev box with its Linux-like OS.

An example of secret sauce still in use today is Nintendo's Breath of the Wild.
 

VFXVeteran

Banned
You forget, which surprises me coming from a PC guy, that doubling GPU performance alone doesn't mean almost double real-world performance. It usually takes upgrading many parts and removing bottlenecks to get the kind of performance increase we're seeing here with Gears 5. This is close to Carmack's claim of 2x performance for a console vs. a similarly spec'd PC, which you like to make fun of but which has a lot of evidence behind it.

PCs are closing that gap with Vulkan and DX12, but draw calls are only part of that equation. Take Linux and X Windows, for example: extremely poorly optimized even with Vulkan if you're using X Windows and not Wayland.

Consoles can still apply custom hardware AND software optimizations that are out of PCs' reach even with Vulkan and DirectX 12, which is what we're seeing here: pre-optimizing for said next-gen console, not with a game designed from the ground up for it, but in a roughly two-week port, compared against a game already running on PC.

Not true at all. You'd have to work at a gaming studio on the development team to know what really happens behind the curtain. But continue to believe what you want until the multiplat comparisons come out on Eurogamer. Ampere, and to a lesser extent the 2080 Ti, will pretty much dominate all platforms.
 
Last edited:

kyliethicc

Member
Yeah, considering if they went with a 256bit bus but increased the GDDR6 frequency from 14Ghz to 18Ghz(available) they would have 16GB @ 576GB/s, all 16GB would have the same speed.
This appears to have been an area of compromise for Microsoft.

Yeah imagine if Sony can spend the extra money and give devs 13 or 14 GB at 576 GB/s. Even 512 GB/s would be great. PS4 Pro games only have 5.5 GB at 217 GB/s. Just imagine what Sony 1st party devs can cook up with 2.5x the RAM & 2.5x bandwidth.
 

semicool

Banned
Not true at all. You'd have to work at a gaming studio on the development team to know what really happens behind the curtains. But continue to believe what you want until the comparisons on eurogamer come out with multiplat games. Ampere - and to a lesser extent - 2080Ti will pretty much dominate all platforms.
Again, did you not read the Digital Foundry article? PCs are catching up to console optimization tech from 2013; maybe they've caught up now. After the Xbox Series X and PS5, they'll have new optimizations, standards, and enhancements to catch up on.
 
Last edited:
Yeah; if they'd kept a 256-bit bus but increased the GDDR6 speed from 14 Gbps to 18 Gbps (which is available), they would have had 16GB @ 576 GB/s, with all 16GB at the same speed.
This appears to have been an area of compromise for Microsoft.
They looked at the VRAM usage of One X consoles using special hardware that wasn't announced until now; one of DF's videos has this bit of info, revealing how much of the VRAM is actually saturated and how much is just filled with unused data/textures. DF video explaining MS's decision to use different bandwidths.
 
Last edited:
RAW TFLOPS ARE WHAT CONSTITUTE A GENERATIONAL LEAP! MICROSOFT KNOWS THIS!


"12 TFLOPs was our goal from the very beginning. We wanted a minimum doubling of performance over Xbox One X to support our 4K60 and 120 targets. And we wanted that doubling to apply uniformly to all games," explains Andrew Goossen. "To achieve this, we set a target of 2x the raw TFLOPs of performance knowing that architectural improvements would make the typical effective performance much higher than 2x. We set our goal as a doubling of raw TFLOPs of performance before architectural improvements were even considered - for a few reasons. Principally, it defined an audacious target for power consumption and so defined our whole system architecture.
 
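The "minimum doubling" Goossen describes can be checked against the published specs. A quick sketch, assuming the publicly reported CU counts and clocks (One X: 40 CUs @ 1.172 GHz; Series X: 52 CUs @ 1.825 GHz) and the usual 2-ops-per-clock FMA convention:

```python
def raw_tflops(cus: int, clock_ghz: float, shaders_per_cu: int = 64) -> float:
    """Raw TFLOPs = CUs * 64 shader ALUs * 2 ops/clock (FMA) * clock."""
    return cus * shaders_per_cu * 2 * clock_ghz / 1000

xbox_one_x = raw_tflops(40, 1.172)   # ~6.0 TF
series_x   = raw_tflops(52, 1.825)   # ~12.1 TF
print(series_x / xbox_one_x)         # ~2.02, just over the 2x target
```

That ratio is raw throughput only; as the quote notes, RDNA 2's architectural improvements push effective performance beyond 2x.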

VFXVeteran

Banned
Yes, other articles have said it's PC top-tier Ultra settings ("Ultra+" they called it), plus, among other graphical improvements, global illumination, all at 4K. Did you not read the Digital Foundry article?

Update edit: https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs

First the only 2 things that Eurogamer spoke about with ray-tracing on XSX is:

1) Contact shadows.

2) A software version of ambient occlusion (this is UE's new feature and wouldn't be needed in Nvidia's case since their cards can do the real 3D ambient occlusion - which is way more accurate).

Those 2 features aren't very heavy, since #2 is essentially the same as HBAO, with the RT casting rays into texture space instead of into the world, or casting world rays based on a 2D depth map.
 

semicool

Banned
First the only 2 things that Eurogamer spoke about with ray-tracing on XSX is:

1) Contact shadows.

2) A software version of ambient occlusion (this is UE's new feature and wouldn't be needed in Nvidia's case since their cards can do the real 3D ambient occlusion - which is way more accurate).

Those 2 features aren't very heavy since #2 is essentially the same as HBAO with the RT casting the rays into the texture space instead of in the world or casting world rays based on a 2D depth map.
Talk about goalpost moving. I watch and play soccer a lot, but this...
 
Last edited:

Zero707

If I carry on trolling, report me.
BTW, who is the leaker who mentioned Anaconda's die size is 350mm²? He also talked about the PS5's die size.
 

LED Guy?

Banned
I read that. It doesn't specify the resolution or the graphics features enabled to run 100 FPS. It reads like 2 different paragraphs (and I'm sure it is). The other question is why wouldn't the optimizations to the engine be available for the Nvidia/AMD GPUs? UE4.0 has a LOT of frame-pacing issues. Some of the UE games with very few features are still bottlenecked. We don't see this with every graphics engine.
Nope, it was all in the same line: they said "Rayner also shared that the game is already running over 100 FPS" WHILE the game is running at native 4K with ultra settings and screen-space RT GI.

This is so impressive, it beats RTX 2080 by a considerable amount. WOW!!

I think they have optimized the engine even further, and maybe that's the RDNA 2 architecture coming into play here; its TFLOPs are more efficient than ever, because this right here beats RTX 2080 Ti performance.

 
Last edited:

psorcerer

Banned
If the 2080 Ti can barely run Gears of War at 4K and 60 FPS, I think it's amazing the Xbox Series X can run it at 100 FPS. If the PS5 is 13 TF, then I think we could have two systems that are true generational leaps. I did not expect such a leap in performance. At most, I was expecting the performance of a 2080 Super, not 40% beyond a Ti. This is absolutely stunning.

I wouldn't be surprised if it's 1440p "DLSSed" to 4K, though.
On the other hand, the 2080 Ti barely hits 100 FPS at 1440p...
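For context on why upscaling is a plausible suspicion, here is the raw pixel-count gap (a back-of-the-envelope sketch that assumes a purely GPU-bound, resolution-proportional workload, which real games rarely are):

```python
# Native 4K vs. 1440p pixel counts
pixels_4k    = 3840 * 2160   # 8,294,400 pixels
pixels_1440p = 2560 * 1440   # 3,686,400 pixels

ratio = pixels_4k / pixels_1440p
print(ratio)          # 2.25x as many pixels at 4K

# Naive scaling: 100 fps at 1440p would land near 44 fps at native 4K
print(100 / ratio)
```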
 

-kb-

Member
Sony does use DX for their development. Trust me on that. ND have two different boxes; their main one runs Windows with VS, compiled to use their special SL wrapper, which is essentially HLSL. They then port the game to the PS5 dev box with its Linux-like OS.

An example of secret sauce still in use today is Nintendo's Breath of the Wild.

Sony's dev environment is VS; it directly interacts with the devkits from Windows. That doesn't mean they're using DirectX or even VS's compiler.
 
Yeah; if they'd kept a 256-bit bus but increased the GDDR6 speed from 14 Gbps to 18 Gbps (which is available), they would have had 16GB @ 576 GB/s, with all 16GB at the same speed.
This appears to have been an area of compromise for Microsoft.
I don't think I understand how the RAM configuration in the Series X works. If you have 16GB at 576 GB/s, your maximum theoretical read speed is 576 GB/s. But wouldn't having two pools, even if one is "only" 336 GB/s, let you access data at peak speeds higher than 576 GB/s when you're using both pools at the same time?
 

-kb-

Member
I don't think I understand how the RAM configuration in the Series X works. If you have 16GB at 576 GB/s, your maximum theoretical read speed is 576 GB/s. But wouldn't having two pools, even if one is "only" 336 GB/s, let you access data at peak speeds higher than 576 GB/s when you're using both pools at the same time?

It's not both pools. Think of it like this: the XSX has two "arrays" of RAM chips; one array has more capacity per chip and the other has less.

To reach peak bandwidth you need to be able to read bytes from all of your chips at once, but because the higher-capacity chips have space that doesn't exist on the lower-capacity chips, you can only read from the higher-capacity chips alone when you're using that extra space.

tl;dr
using the 10GB you get 320 bits
using the other 6GB you get 192 bits
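The explanation above can be sketched numerically. This assumes the widely reported XSX layout (six 2GB chips plus four 1GB chips, each on a 32-bit channel at 14 Gbps); the first 10GB is striped across all ten chips, while the last 6GB lives only on the six larger chips:

```python
CHIP_BUS_BITS = 32   # each GDDR6 chip sits on its own 32-bit channel
GBPS_PER_PIN  = 14   # per-pin data rate

chips_2gb, chips_1gb = 6, 4
total_chips = chips_2gb + chips_1gb  # ten chips, 320-bit total bus

def bw(chips: int) -> float:
    """Bandwidth in GB/s when reading from `chips` chips in parallel."""
    return chips * CHIP_BUS_BITS / 8 * GBPS_PER_PIN

# First 10GB: interleaved across all ten chips
print(bw(total_chips))   # 560.0 GB/s over 320 bits
# Remaining 6GB: exists only on the six 2GB chips
print(bw(chips_2gb))     # 336.0 GB/s over 192 bits
```

So the two "speeds" are the same chips read in different-width groups, not two independent pools you can add together.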
 
Last edited:

psorcerer

Banned
No, they confirmed it is running at native 4K resolution; maybe they have pushed the engine and optimized it even further, that's why.

Dunno. Seems really too good to be true.
Although I may be wrong.
Gears 5 is a pretty undemanding game, though; it was created with a 60fps target on current-gen hardware.
I also think their SSGI looks strange; they used some other GI model in the original version, maybe RSM from the sun, like Metro Exodus.
Anyway, even getting roughly 2080 Ti performance in a first, unoptimized port is amazing.
 

sneas78

Banned
I played that Gears on my X. I'm not impressed by that screen. Do we need a 4K screen or something? It looks just like it does on the X.
 