Null_Key
You know in the Middle East they wipe their ass with their hand.
No secret sauce? You want to tell me we have SSD with hardware decompression on PC?

I'm going to let experience and common sense do the work for me. There really isn't any kind of secret sauce going on, nor will there ever be at this stage (except Switch games). They are all using DX, and XSX and PCs are pretty tightly coupled.
I read it.
12TF of normal stuff and another 13TF from RT hardware = it's like 25TF when running RT games.
Awesome. Good to see DF get first coverage of SeX and tell gamers.
But what about Sony, who have never used DX and aren't bound by any PC architecture? So you are saying their solution will be the same with just different API calls, and there won't be any secret sauce on their part??

I'm going to let experience and common sense do the work for me. There really isn't any kind of secret sauce going on, nor will there ever be at this stage (except Switch games). They are all using DX, and XSX and PCs are pretty tightly coupled.
It's from the Demolition Man movie:
Btw you asked a good question... after 27 years there's still no answer to that.
You forget, which surprises me coming from a PC guy, that doubling GPU performance alone doesn't mean almost doubled game performance. It usually takes upgrading many parts, removing bottlenecks, to get the type of performance increase we are seeing here with Gears 5. This is close to Carmack's claim of 2x performance for a console vs a similarly specced PC, which you like to make fun of but which has a lot of evidence behind it.

You guys are taking info and running all the way around the world with it. LOL!
12TF > 16TF both using the same graphics API and the same draw calls? Yea, OK.
Yes, other articles have said PC top-tier Ultra settings ("Ultra+" they called it), plus, among other graphical improvements, global illumination, all at 4K. Did you not read the Digital Foundry article?

I read that. It doesn't specify the resolution or the graphics features enabled to run 100 FPS. It reads like 2 different paragraphs (and I'm sure it is). The other question is why wouldn't the optimizations to the engine be available for the Nvidia/AMD GPUs? UE4.0 has a LOT of frame-pacing issues. Some of the UE games with very few features are still bottlenecked. We don't see this with every graphics engine.
Correct. There is no ASIC decompressor secret sauce on the PC. You'll need the equivalent of a 3950X to brute-force next-gen games. CPU-centric vs specialized-silicon philosophy. It's been that way since the 80s.

No secret sauce? You want to tell me we have SSD with hardware decompression on PC?
I don't get why MS is going with the weird RAM setup. Only 10GB of really fast memory for devs, and 3.5GB of slower RAM. If Sony does a full 16GB of fast RAM, 13-14 usable to devs, that would be a big advantage. Even more so if Sony goes with the rumored 16GB for devs and 2-4GB of DDR4 for the OS.
Saying really fast and slower means nothing without numbers, just look at hard facts and the numbers:

I don't get why MS is going with the weird RAM setup. Only 10GB of really fast memory for devs, and 3.5GB of slower RAM. If Sony does a full 16GB of fast RAM, 13-14 usable to devs, that would be a big advantage. Even more so if Sony goes with the rumored 16GB for devs and 2-4GB of DDR4 for the OS.
It's on PC as well. You want to compare Ori to The Last of Us 2?
Yeah, considering if they went with a 256-bit bus but increased the GDDR6 data rate from 14Gbps to 18Gbps (available), they would have 16GB @ 576GB/s; all 16GB would have the same speed.

I don't get why MS is going with the weird RAM setup. Only 10GB of really fast memory for devs, and 3.5GB of slower RAM. If Sony does a full 16GB of fast RAM, 13-14 usable to devs, that would be a big advantage. Even more so if Sony goes with the rumored 16GB for devs and 2-4GB of DDR4 for the OS.
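For what it's worth, the arithmetic behind all of these figures is just bus width in bytes times the per-pin data rate. A minimal sketch, assuming the reported XSX figures (320-bit bus at 14Gbps, with the slow pool on a 192-bit subset) alongside the hypothetical 256-bit/18Gbps alternative:

```python
# Peak GDDR6 bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Theoretical peak bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

# Reported XSX split: all ten 32-bit chips for the first 10GB,
# only the six 2GB chips (192 bits) for the upper 6GB.
print(bandwidth_gbs(320, 14))  # 560.0 GB/s fast pool
print(bandwidth_gbs(192, 14))  # 336.0 GB/s slow pool

# Hypothetical uniform alternative: 256-bit bus at 18Gbps
print(bandwidth_gbs(256, 18))  # 576.0 GB/s across all 16GB
```

So the 576GB/s claim checks out arithmetically; the trade-off is that 18Gbps chips cost more and run hotter than 14Gbps parts.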
But what about Sony, who have never used DX and aren't bound by any PC architecture? So you are saying their solution will be the same with just different API calls, and there won't be any secret sauce on their part??
You forget, which surprises me coming from a PC guy, that doubling GPU performance alone doesn't mean almost doubled game performance. It usually takes upgrading many parts, removing bottlenecks, to get the type of performance increase we are seeing here with Gears 5. This is close to Carmack's claim of 2x performance for a console vs a similarly specced PC, which you like to make fun of but which has a lot of evidence behind it.
PCs are closing that gap with Vulkan and DX12, but draw calls are only part of that equation. Take Linux and X Windows for example: extremely poorly optimized even with Vulkan, if using X Windows and not Wayland.
Consoles can still get custom hardware AND software optimizations out of the PC's reach, even with Vulkan and DirectX 12, which is what we are seeing here... and this isn't even pre-optimization with a game designed for said next-gen console from the ground up, but rather work done in a 2-week period, compared against a game already running on PC.
MS was right the whole time. As they said last summer, their next-gen system will be 4x the power!

MS did it. 25 TF of power
Yeah, considering if they went with a 256-bit bus but increased the GDDR6 data rate from 14Gbps to 18Gbps (available), they would have 16GB @ 576GB/s; all 16GB would have the same speed.
This appears to have been an area of compromise for Microsoft.
Again, did you not read the Digital Foundry article? PCs are catching up to console optimization tech from 2013. Maybe caught up now. After Xbox SX and PS5, they'll have new optimizations and standards and enhancements to catch up on.

Not true at all. You'd have to work at a gaming studio on the development team to know what really happens behind the curtains. But continue to believe what you want until the comparisons on Eurogamer come out with multiplat games. Ampere - and to a lesser extent - 2080Ti will pretty much dominate all platforms.
They have looked at the VRAM usage on their One X console with special hardware that wasn't announced until now; one of DF's videos has this bit of info, and there it is revealed how much of the VRAM is saturated and how much is just filled with unused stuff/textures. DF video explaining MS's decision on using different bandwidths:

Yeah, considering if they went with a 256-bit bus but increased the GDDR6 data rate from 14Gbps to 18Gbps (available), they would have 16GB @ 576GB/s; all 16GB would have the same speed.
This appears to have been an area of compromise for Microsoft.
Yes, other articles have said PC top-tier Ultra settings ("Ultra+" they called it), plus, among other graphical improvements, global illumination, all at 4K. Did you not read the Digital Foundry article?
Update edit: https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs
3080Ti will pretty much dominate all platforms.
MS did it. 25 TF of power
Talk about goalpost moving. I watch and play soccer a lot, but this... First, the only 2 things that Eurogamer spoke about with ray tracing on XSX are:
1) Contact shadows.
2) A software version of ambient occlusion (this is UE's new feature and wouldn't be needed in Nvidia's case since their cards can do the real 3D ambient occlusion - which is way more accurate).
Those 2 features aren't very heavy since #2 is essentially the same as HBAO with the RT casting the rays into the texture space instead of in the world or casting world rays based on a 2D depth map.
This man said "beacon of technical accomplishment". Like, why do they slurp Microsoft so much, really weirds me out...
MS did it. 25 TF of power
Seek help.
Nope, it was in the same line, like that; they were saying "Rayner also shared that the game is already running over 100 FPS" WHILE the game is running at native 4K with ultra settings and screen-space RT GI.

I read that. It doesn't specify the resolution or the graphics features enabled to run 100 FPS. It reads like 2 different paragraphs (and I'm sure it is). The other question is why wouldn't the optimizations to the engine be available for the Nvidia/AMD GPUs? UE4.0 has a LOT of frame-pacing issues. Some of the UE games with very few features are still bottlenecked. We don't see this with every graphics engine.
BTW, who is the leaker who mentioned the Anaconda die size is 350mm2? He also talked about the PS5 die size.
AquariusZi said the Xbox Series X is 350mm2 and Oberon 300mm2.

What did he say of the PS5 die size?
If the 2080 Ti can barely run Gears of War at 4K and 60 FPS, I think it's amazing the Xbox Series X can run it at 100 FPS. If the PS5 is 13 TF, then I think we could have two systems that are true generational leaps. I did not expect such a leap in performance. At most, I was expecting performance of a 2080 Super, not 40% beyond a Ti. This is absolutely stunning.
Sony does use DX for their development. Trust me on that. ND have two different boxes, and their main one is using Windows with VS, compiled to use their special SL wrapper, which is essentially HLSL. They then port the game to the PS5 dev box using its Linux-like OS.
An example of secret sauce still in use today is Nintendo's Breath of the Wild.
Haha I exposed you hahaha
No, they confirmed it is running at native 4K resolution; maybe they have pushed the engine and optimized it even further, that's why.

I wouldn't be surprised if it's 1440p "DLSSed" to 4K though.
On the other hand 2080Ti barely hits 100 in 1440p...
I don't think I understand how the RAM configuration in Series X works. If you have 16GB at 576GB/s, your maximum theoretical read speed is 576GB/s. But wouldn't having 2 pools, even if one is "only" 336GB/s, allow you to access data at peak speeds higher than 576GB/s when you are using both pools at the same time?

Yeah, considering if they went with a 256-bit bus but increased the GDDR6 data rate from 14Gbps to 18Gbps (available), they would have 16GB @ 576GB/s; all 16GB would have the same speed.
This appears to have been an area of compromise for Microsoft.
AquariusZi said the Xbox Series X is 350mm2 and Oberon 300mm2.
I don't think I understand how the RAM configuration in Series X works. If you have 16GB at 576GB/s, your maximum theoretical read speed is 576GB/s. But wouldn't having 2 pools, even if one is "only" 336GB/s, allow you to access data at peak speeds higher than 576GB/s when you are using both pools at the same time?
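A plausible answer to that question, assuming the layout Digital Foundry described (one shared 320-bit bus, i.e. ten 32-bit chips, with the 6GB slow pool living only on the six 2GB chips): the two "pools" are not separate buses, so their peak bandwidths don't stack; any access to the slow pool occupies six of the ten channels. A quick sketch of that ceiling:

```python
# Hypothetical model of the reported Series X layout (an assumption, not an
# official diagram): ten 32-bit GDDR6 channels at 14Gbps. The "slow" 6GB
# lives only on the six 2GB chips; the "fast" 10GB is striped across all ten.
CHANNEL_GBS = 32 / 8 * 14        # one 32-bit channel: 56.0 GB/s
TOTAL_CHANNELS = 10

total_bus = CHANNEL_GBS * TOTAL_CHANNELS   # 560.0 GB/s, the hard ceiling
slow_pool = CHANNEL_GBS * 6                # 336.0 GB/s
fast_remaining = CHANNEL_GBS * 4           # 224.0 GB/s left while slow pool is busy

# Touching both pools at once cannot exceed the shared bus total:
assert slow_pool + fast_remaining == total_bus   # 336 + 224 = 560
```

In other words, the 336GB/s figure is what the six shared chips deliver on their own, not an extra pool on top of 560GB/s, so concurrent access still tops out at the same 560GB/s total.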
No, they confirmed it is running at native 4K resolution; maybe they have pushed the engine and optimized it even further, that's why.
If indeed the die is 50mm2 smaller, I could see them aiming for 2.0GHz.

If that turns out to be the actual die size of the final PS5 APU, I don't see the PS5 overtaking the XSX in terms of power. They might match it at most.