I find it interesting that in the past week or so maths has changed such that 102 + 68 = 182 now instead of 170.
How much bandwidth do you actually need for a 1080p game at 30fps given that a lot of the textures and geometry are going to be exactly the same from frame to frame?
No... no, you can't just add those two numbers together.
So... odds on Major Nelson doing a new comparison for this gen?
Even if the textures and geometry are exactly the same from frame to frame, the entire scene still needs to be streamed to the GPU from whatever memory it's rendering from. To render a scene in a certain amount of time (16ms for 60fps or 33ms for 30fps), you need to be able to supply all of the assets in that scene to the GPU within that window.
Assume for this exercise you have unlimited GPU power (which, compared to the 360 and PS3 GPUs, might as well be true). We're also making assumptions about latency which don't really apply in this scenario, and assuming the GPU gets exclusive access to main memory, which won't happen in the real world. This scenario is simplistic, a first approximation, but it works for illustrative purposes.
Let's assume we have 2GB of assets in a scene, which is what developers are starting to push toward and will certainly hit this generation, at least on the PC side.
In the case of the XB1, if you have 2GB of assets in a scene and the GPU can read from main memory at 68GB/sec, it's going to take 29ms for the bus to get the entire scene to the GPU. So you're going to be limited to 30fps by virtue of how long it takes to physically get the scene from memory to the rendering units.
What about the eSRAM? Well, there's only 32MB of it, and nothing can make the 68GB/sec bus that feeds it deliver data any faster. Streaming assets to the GPU is purely a matter of how fast you can shove data down the pipe; caches don't mean shit here. It's basically good for the frame buffer and AA and that's it.
The PS4, on the other hand, is going to get the assets in the scene streamed to the GPU in 11ms. So you can target 60fps, or you can increase the amount of assets in the scene: bigger textures, more geometry, etc. Memory bandwidth is almost certainly going to be the limiting factor; as scenes push past 1GB it will stop the XB1 from targeting 60fps, while the PS4 will still be cruising along with bandwidth to spare.
It might come down to shader throughput at some point too but I think we're going to see more games this generation which are 30fps on XB1 and 60fps on PS4 just by virtue of their memory bandwidth.
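A quick sketch of that arithmetic, under the same simplifying assumptions (one full pass over the scene's assets per frame, exclusive use of the bus, peak bandwidth actually sustained). The 176GB/s figure for the PS4's GDDR5 is the commonly cited peak and is assumed here rather than taken from the posts above:

```python
# Back-of-the-envelope frame-time cost of streaming a scene's assets once per frame.
GB = 1e9  # decimal gigabytes, matching the GB/s figures being quoted

def stream_time_ms(scene_bytes, bandwidth_bytes_per_s):
    """Time to move the whole scene from memory to the GPU once, in milliseconds."""
    return scene_bytes / bandwidth_bytes_per_s * 1000

scene = 2 * GB          # 2GB of assets in the scene
xb1_ddr3 = 68 * GB      # XB1 DDR3 peak
ps4_gddr5 = 176 * GB    # commonly cited PS4 GDDR5 peak (assumption, not from the thread)

print(stream_time_ms(scene, xb1_ddr3))   # ~29.4ms: misses a 16.7ms (60fps) frame, fits 33.3ms (30fps)
print(stream_time_ms(scene, ps4_gddr5))  # ~11.4ms: comfortably inside a 60fps frame
```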
I think you are doing a power of 2 to power of 10 conversion wrong somewhere. It is honestly as simple as 102 + 68 = 170.

A 53 MB file split between eSRAM and main memory can be accessed @182GB/s. That's what the eSRAM is here for.
32MB@102GB/s = 0.000292180s
21.329MB@68GB/s = 0.000292120s
The buses are separate so they work simultaneously. So the entire 53MB file is done in 0.000292180s (the larger of the two times) and that gives you a rate of 182.5GB/s.
My calculator seems fine from my standpoint.
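For what it's worth, the disagreement does look like a units mix-up: divide decimal megabytes by binary gibibytes-per-second and you get the ~0.000292s times and 182.5GB/s; keep everything decimal and the combined rate comes out to exactly 102 + 68 = 170GB/s. A small sketch of both versions, using the 32MB / 21.33MB split from the post above:

```python
# Combined eSRAM + DDR3 transfer, computed with consistent decimal units
# and then with the MB-divided-by-GiB mix-up that yields 182.5GB/s.
MB, GB, GiB = 1e6, 1e9, 2**30

esram_bytes = 32 * MB      # the part of the file sitting in the 32MB of eSRAM
ddr3_bytes = 21.33 * MB    # sized so both buses finish at the same time

def combined_rate(esram_bw, ddr3_bw):
    # The buses run in parallel; the transfer is done when the slower one finishes.
    t = max(esram_bytes / esram_bw, ddr3_bytes / ddr3_bw)
    return (esram_bytes + ddr3_bytes) / t / GB   # effective rate quoted in decimal GB/s

print(combined_rate(102 * GB, 68 * GB))     # ~170.0 -- consistent units
print(combined_rate(102 * GiB, 68 * GiB))   # ~182.5 -- reproduces the 0.000292s times above
```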
Yes, it can, but that gives you 170GB/s for only 32MB of data; all the rest is just 68GB/s. So you still cannot claim 170GB/s for all of your RAM access.
According to the leaked docs, the GPU can pull data from DDR3 and eSRAM in parallel for a theoretical max of 170GB/s.
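To put a rough number on "only 32MB of it": assuming a single pass over the scene per frame, 32MB of it pinned in eSRAM and everything else coming over DDR3, with no reuse of the eSRAM within the frame (the same deliberately simplistic model as the 2GB example earlier), the blended rate collapses back toward 68GB/s as the scene grows:

```python
# Blended bandwidth for one pass over a scene when only 32MB can sit in eSRAM
# and the rest streams over DDR3. No eSRAM reuse within the frame is assumed.
MB, GB = 1e6, 1e9
ESRAM_BYTES, ESRAM_BW, DDR3_BW = 32 * MB, 102 * GB, 68 * GB

def effective_bw_gbs(scene_bytes):
    in_esram = min(scene_bytes, ESRAM_BYTES)
    in_ddr3 = scene_bytes - in_esram
    t = max(in_esram / ESRAM_BW, in_ddr3 / DDR3_BW)   # the two buses run in parallel
    return scene_bytes / t / GB

for scene_mb in (53.33, 128, 512, 2000):
    print(scene_mb, round(effective_bw_gbs(scene_mb * MB), 1))
# 53.33MB -> ~170.0, 128MB -> ~90.7, 512MB -> ~72.5, 2000MB -> ~69.1
```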
By this logic, what is the Wii U's speed? Doesn't it have a similar approach with embedded eDRAM?
You guys are assuming all the specs are exactly the same as what vgleaks posted to make this claim, and none of you know that. Microsoft did not reveal any clock speeds, or any flop numbers to derive clock speeds from, either. They did more or less confirm 768 shaders.
It's in the realm of possibility, for example, that a simple GPU upclock to 1GHz would boost the bandwidth in the vgleaks diagram as-is to 200GB/s with no other changes.
It's possible MS outright lied/fudged. It's also possible they did not.
GPU clock has NOTHING to do with memory bandwidth. You're confusing it with FLOPs.
They didn't up anything.
They are using Micron DDR3-2133
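The 68GB/s figure falls straight out of that: DDR3-2133 is 2133 million transfers per second, and the leaked specs put the XB1's memory interface at 256 bits wide. A quick check under those two numbers:

```python
# Peak DDR3 bandwidth = transfer rate x bus width. GPU clock doesn't enter into it;
# only the memory speed and the width of the interface do.
transfers_per_sec = 2133e6   # DDR3-2133
bus_width_bits = 256         # 256-bit interface per the leaked specs

peak_bytes_per_sec = transfers_per_sec * bus_width_bits / 8
print(peak_bytes_per_sec / 1e9)   # ~68.3 GB/s
```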
Actually, the reason Nintendo don't release specs since Wii is the exact same reason MS didn't release GPU specs this time around: because they are worse.

When people ask why Nintendo doesn't release specs? This is why. Everyone just lies anyways.
Are there not organisations that protect us from Major Nelson's BS? Advertising falsehoods is kinda scary. Also, it seems the eSRAM exists only to pretend it's faster than it really is. Yes, first-party devs for the Xbone will be able to be creative with it, but it will not solve the bandwidth issues for multiplatform games.
Vgleaks has a wealth of info, likely supplied from game developers with direct access to Xbox One specs, that looks to be very accurate at this point. According to their data, there’s roughly 50GB/s of bandwidth in each direction to the SoC’s embedded SRAM (102GB/s total bandwidth). The combination of the two plus the CPU-GPU connection at 30GB/s is how Microsoft arrives at its 200GB/s bandwidth figure, although in reality that’s not how any of this works. If it’s used as a cache, the embedded SRAM should significantly cut down on GPU memory bandwidth requests which will give the GPU much more bandwidth than the 256-bit DDR3-2133 memory interface would otherwise imply. Depending on how the eSRAM is managed, it’s very possible that the Xbox One could have comparable effective memory bandwidth to the PlayStation 4. If the eSRAM isn’t managed as a cache however, this all gets much more complicated.
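Spelling out the addition the article describes, taking the leaked per-link peaks at face value (the even 51/51 read/write split is an assumption; the article only says roughly 50GB/s each way for ~102GB/s total):

```python
# How the headline 200GB/s figure appears to be assembled from per-link peaks.
ddr3 = 68              # GB/s, DDR3-2133 on a 256-bit bus
esram_read = 51        # GB/s, roughly 50 each direction...
esram_write = 51       # ...adding up to the ~102GB/s eSRAM figure
cpu_gpu_coherent = 30  # GB/s coherent CPU-GPU link

print(ddr3 + esram_read + esram_write + cpu_gpu_coherent)   # 200
# Summing peaks like this assumes every link is saturated with useful,
# non-overlapping traffic at once -- which, as the article notes, is not how it works.
```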
Are there not organisations that protect us from Major Nelson's BS?
I don't even know what to say about this. Are you serious? You do realize every company puts forth "favorable" numbers that are arguably fudged. Go back to Sony claiming the PS3 had two teraflops... and making a bunch of CG vids at E3 2005 and saying they were gameplay.
5 Billion transistors!
This is kind of disingenuous. It's probably true for MS but Nintendo didn't release specs because the Wii/U wasn't intended to be sold on the promise of ray traced reflections.

Actually, the reason Nintendo don't release specs since Wii is the exact same reason MS didn't release GPU specs this time around: because they are worse.
Rocket science.
Shouldn't use the calcucorn for professional numbers.

My calculations of 32MB on eSRAM @102GB/s and 21.32MB on DDR3 @68GB/s give me a speed of 182.5GB/s.
It's a bit like saying the reason Fiat don't tout the 500's 0-62 time is because it's worse than a BMW's. It's not at all, it's because it's not built for good acceleration and nobody buys it for that reason.

Your comparison doesn't really work: the Fiat 500's 0-100 times are right there in its spec list. Sure, they don't advertise it, but they also aren't completely withholding important information.